Amend: to change or modify (something) for the better: improve (Merriam-Webster)
While this definition refers to the colloquial rather than legal use of ‘amend,’ there is a reason why changes to the United States Constitution are referred to by a derivative of this word. Amendments to the Constitution reflect a change and improvement in our core values as a nation. What started out as ten amendments (for all of you history buffs crawling out of your skin, I am aware that the Bill of Rights as originally proposed contained twelve amendments, but for now we will refer to the ten that were ratified) has nearly tripled to 27. As history has played out, it has become very clear that ten meager amendments were not nearly enough to protect people from the stark violations of human rights that would ensue in America. This country has experienced a civil war, a women’s suffrage movement, a civil rights movement, a movement against prohibition, and countless other demonstrations to arrive at the 27 amendments that are now codified in the most authoritative document in the United States. While it must be noted that a paragraph written on a fancy piece of paper has often done little to actually ensure equality and fairness in the lives of many Americans, what gives a Constitutional amendment its gravity is its permanence. An amendment is not merely a quick solution to a temporary issue. An amendment does not come to fruition as the result of a single administration, or a politician trying to bolster their agenda. An amendment knows no political party or faction (in theory, at least). An amendment weaves a value into the fabric of our nation. An amendment is the result of the American people coming together and deciding, with overwhelming consensus, that an issue must be regarded above all others — that it is so crucial and fundamental to the core values of the nation that it cannot simply be addressed by a single piece of legislation, or quite frankly, any number of laws. But most importantly, an amendment gives citizens a sense of protection (again, in theory).
And now, in subtle fashion, let us completely step away from history and into computer science to discuss the meaning of a blackbox algorithm. This is a term that I would be willing to bet money you did not hear in your eighth grade U.S. history class, but I assure you it is crucial to what we are going to discuss. The phrase blackbox comes from, well, precisely what it sounds like. Imagine you have a box that is black (I’m starting to sound like a broken record), but more importantly, opaque. You cannot see inside this box whatsoever — you do not know any properties of this box other than the fact that it is black. Say that this magic black box of yours has one purpose: identifying fruits. You feed it a picture of a fruit, and it spits out ‘banana’ or ‘orange’ or some other fruit. This mystical little oracle is decently good at its job — it almost always correctly identifies the fruit you input. Now, you show this black box to one of your friends, who is fascinated by how such a simple contraption can be so intelligent. They demand to know the secret. How does this box know the difference between a cantaloupe and a watermelon? But you cannot give them an answer, because, as the name so cleverly implies, you cannot see into or understand this box. You trust it to accurately identify fruits, but you cannot explain why it correctly identifies fruits when it does, or, more importantly, why it occasionally does not perform in the way you intend.
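The blackbox idea above can be sketched in a few lines of code. Everything here is hypothetical — an invented classifier whose only public operation is a prediction, with its internals deliberately hidden from the caller:

```python
# A minimal sketch of a blackbox: callers can query the model,
# but nothing about *how* it decides is exposed. The classifier,
# its hidden rule, and the "image" format are all invented.

class BlackBoxFruitClassifier:
    """Wraps a model so that only its predictions are visible."""

    def __init__(self, model):
        # The wrapped model is kept private: no weights, rules,
        # or features are exposed to the caller.
        self._model = model

    def predict(self, image):
        # The only operation the outside world gets: input in, label out.
        return self._model(image)


# A stand-in "model" whose internals the user of the box never sees.
def _hidden_model(image):
    return "banana" if sum(image) % 2 == 0 else "orange"

box = BlackBoxFruitClassifier(_hidden_model)
label = box.predict([3, 1, 4])
# A label comes out, but the box offers no way to ask *why*.
```

The point of the sketch is that `predict` is the entire interface: you can measure how often the box is right, but you cannot explain any individual answer.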
It is safe to say that the stakes of this magical box failing are low — maybe you take a slight blow to your pride when you show it off to others and it gets a fruit wrong. But this very concept of using an unexplainable mechanism, specifically a decision-making algorithm powered by artificial intelligence, is no stranger to the world we live in, and unfortunately it has detrimental, life-altering consequences for the people who fall victim to its use. In a world where artificial intelligence’s sole purpose is to label fruits, using an algorithm whose own designer cannot necessarily understand how it works or why it makes decisions the way that it does is seemingly harmless. But, for better or for worse, we live in a world in which artificial intelligence has seeped into nearly every sector of society. Automated algorithms are used to expedite the hiring process, make college admission decisions, decide jail sentences, make loan predictions — decisions that have the potential to drastically alter the course of one’s life. To leave these decisions to a machine that cannot be explained or understood by even top computer scientists is not only careless, but carries incredibly detrimental implications for individuals and society. These algorithms that we unquestioningly trust to be true are the very ones that are turning away poor minorities from necessary loans at disproportionately high rates, rejecting qualified women from high-paying jobs on the basis of sex, and rejecting bright, would-be first-generation college students from prestigious universities.
I will be the first to admit that as a computer scientist, blackbox algorithms are undoubtedly appealing. Personally, I am someone who enjoys the applications of computer science much more than the theory. So when my professors permit me to simply write “use algorithm x as a blackbox” to receive full credit on an exam question, I do not think twice. We as engineers enjoy when a solution is quick, formulaic, and accepted without question or explanation. It can feel like it is saving us precious time and mental energy. This, however, is an exceedingly dangerous mindset to adopt. We cannot favor the solutions that require minimal critical thinking, the ones that are simply given to us to be taken as true. We cannot wrap a complex artificial intelligence algorithm with millions of parameters, one that has the power to destroy the lives of many, in a dainty bow that magically absolves us from actually understanding the technology and its potential harms. We call it blackboxing. But to the millions of people whose fates get unfairly determined by these opaque machines, it is so much more. It is a denial of opportunity. But perhaps more importantly, it is a denial of any explanation for why they were judged the way they were, leaving them helpless against machines that we as engineers have refused to understand and explain.
Not So Equal Protection Under the Law
I am not trying to argue that there is a hierarchy of importance when it comes to knowing how one’s data is being used. However, I will say that if the example used to justify the necessity of an unalienable right to explanation were that I am uncomfortable with my internet browser knowing my favorite brand of hot sauce and catering advertisements to me accordingly, the proposal of this amendment would almost certainly perish alongside the thousands of other amendment proposals that have surfaced since the ratification of the Constitution. But, unfortunately, the ramifications of having no such right codified into law reach far beyond a spicy condiment.
Let us say that you are an immigrant who has just arrived in America from an imaginary foreign nation called MITLand. You are not a fluent English speaker and you speak with a strong MITLandic accent, and due to America’s inherent prejudice towards people from MITLand (while this is an imaginary country, we know all too well that there is systemic prejudice towards those with foreign accents in America), it is very difficult for you to find a job. Time is ticking, as you arrived with a finite amount of savings but an endless list of expenses to pay — rent, utilities, food, transportation, etc. But you breathe a sigh of relief as you apply for a bank loan that will keep a roof over your head while you continue to search for a job. The thought of being denied this loan has not even crossed your mind — you pay your bills on time, you have built up credit during your time in America, you have no criminal history. Essentially, you have given the bank no reason to worry that you would default on your loan. You fill out the application, and right as the last of your savings begins to dry up, you receive news that, against all odds, you were denied the loan. You are helplessly left cashless with bills looming over your head, but more importantly, you are left confused. Any sound loan officer would be able to confidently assert that, based on your history, you are a more than trustworthy loan applicant. How is it that, despite taking all the right steps, you were denied this necessary aid?
It turns out that it is because there was no ‘sound loan officer’ evaluating your trustworthiness to repay a loan in the first place. You were not denied ‘against all odds,’ because if you reapplied for a loan with equally promising records, you would be denied again. And again. And again.
Let us paint the picture of how this happened. The bank, receiving thousands of loan applications every year, has neither the time nor the resources to individually evaluate each person in a timely manner. So, it does what all companies seeking to modernize, revolutionize, and optimize their business do — it uses an algorithm powered by artificial intelligence. Specifically, this bank deploys an algorithm that takes in a wide array of data points about a given individual and outputs a score ranking their trustworthiness to pay back a loan. It is up to the bank to make several design decisions, namely what metrics and data are evaluated to determine the score, how each of these is weighted in the calculation of the score, and what cutoff decides whether or not an individual is trustworthy enough to receive a loan.
Let us say that the loan officer designing this algorithm (assume that this loan officer is multi-faceted and is savvy with computer science) has a particular internalized prejudice against people from MITLand. While federal law — most directly the Equal Credit Opportunity Act — prohibits this racist loan officer from explicitly denying people loans on the basis of race or national origin, he has discovered an unfortunate loophole. He knows the town very well, and happens to know that a particular zip code within the town has a disproportionately high rate of MITLandic residents. This de facto segregation of sorts allows the loan officer to program a racist feature into the algorithm: residents of this particular zip code are deemed untrustworthy applicants, so living there heavily penalizes one’s chances of receiving a loan, while living anywhere else boosts them. Zip code is not a legally protected class, but we know all too well that in many places in America, zip code correlates almost directly with race. So, while still technically legal, you have been denied a necessary bank loan based almost solely on your race. You risk eviction, homelessness, and food insecurity, all because of a discriminatory algorithm that is legal in the eyes of the American justice system. And, because you were turned away, this bank never finds out that you actually were a trustworthy applicant — in mathematical terms, a false negative. You are a valuable data point, one that could cause a machine learning algorithm to re-examine its parameters, figure out why you were misclassified, and correct itself so that this does not happen again in the future (thus hopefully putting an end to the preprogrammed prejudice). But this critical feedback is never fed to the algorithm, leading it to believe that unfairly rejecting MITLandic people is ‘correct,’ which only further perpetuates this automated discrimination on a grand scale.
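The mechanics of this loophole can be sketched concretely. Everything below is invented for illustration — the feature weights, the penalized zip code, and the approval cutoff are hypothetical, not drawn from any real lending model:

```python
# Hypothetical loan-scoring sketch: zip code is not a protected class,
# but a flat penalty on one zip code can act as a proxy for penalizing
# a group that lives there. All weights and cutoffs are invented.

PENALIZED_ZIP = "02139"   # invented zip code with mostly MITLandic residents
APPROVAL_CUTOFF = 0.6

def trust_score(applicant: dict) -> float:
    score = 0.0
    score += 0.5 * applicant["on_time_payment_rate"]        # 0.0 to 1.0
    score += 0.3 * (applicant["credit_score"] / 850)        # normalized
    score += 0.2 * (0.0 if applicant["criminal_history"] else 1.0)
    # The discriminatory feature: a flat penalty on one zip code.
    if applicant["zip_code"] == PENALIZED_ZIP:
        score -= 0.4
    return score

def approve(applicant: dict) -> bool:
    return trust_score(applicant) >= APPROVAL_CUTOFF

# An otherwise spotless applicant who lives in the penalized zip code:
applicant = {
    "on_time_payment_rate": 1.0,
    "credit_score": 780,
    "criminal_history": False,
    "zip_code": PENALIZED_ZIP,
}
# Score: 0.5 + 0.3*(780/850) + 0.2 - 0.4 ≈ 0.58 -> denied, while the
# identical record in any other zip code scores ≈ 0.98 and is approved.
```

Note that every individual feature here is facially legal; the discrimination lives entirely in one weight that the bank is never required to disclose.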
Furious, terrified, and seeking answers, you begin to research why you have been denied this loan. Upon doing so, you come to learn that your neighbors, who are also MITLandic and in dire need of a loan, have been denied as well. They have a similar history of paying bills on time, and are equally confused as to why they were rejected. Additionally, you have a friend who works for this bank, who recalls several occasions on which this particular loan officer made racist comments about MITLandic folk and boasted about how he had found a way to systematically turn them away from his bank.
Bingo. You have a valid court case on your hands, right? Well, not necessarily. As much research and evidence as you may have gathered in support of your case, in the end you are still ultimately going up against a machine. Particularly, a machine whose purpose is to think beyond the level of human cognition, a machine that discovers trends in data in ways that humans cannot comprehend, a machine that performs in a matter of seconds computations that would take the entire span of a human lifetime. In other words, a machine that is supposedly superhuman. Suddenly, your evidence is as good as an anecdote. What, on your end, was once thorough research now amounts to baseless claims in the eyes of the court. And even worse, there is no mandate in place that requires this evidently racist loan officer to explain the algorithm that has drastically altered your stability, security, and life overall. Due to its esteemed nature, this pernicious algorithm is trusted by the court to be fair and true. It is a blackbox. But it is not one that innocently identifies fruits. It is one that systematically derails people’s lives — people like you who are trustworthy, hard-working, and happen to be MITLandic.
The protections America already has in place are suddenly rendered useless — how can you argue that what has happened to you is a violation of the Fourteenth Amendment or the Civil Rights Act; that you have been discriminated against on the basis of race when there is no requirement on behalf of the loan officer to disclose how this algorithm makes its decisions? The protections America has set in place to equalize its citizens under the law become obsolete, and not because of a flaw in how these protections were framed, but rather because of the absence of a new protection, a protection whose necessity has surfaced only recently, but acutely — the right to explanation.
Why an Amendment and not a Law?
Now we are back where we started — amendments. Doesn’t it sound nice? Tacking a “right to explanation” onto the end of the Constitution, thereby preventing situations like the one above from ever occurring. Well, it is more complex than that. Passing any proposed amendment is incredibly difficult and requires broad consensus from across the political spectrum — a nearly unthinkable concept in our current climate. Specifically, an amendment must be approved by a two-thirds supermajority in each house of Congress, and then must be ratified by the legislatures of three-fourths of the states. Considering that our Senate is split precisely down the middle, and that party loyalty has assumed a nearly primary role in deciding how legislators vote, it is highly unlikely that any proposed amendment, no matter how bipartisan, will ever be ratified in our lifetime.
Aside from the amendment process itself, there are many valid counterarguments to this vague, blanket right to explanation. In the example above, even if a machine were not used, the loan officer would not necessarily be required to explain his decision to you. If humans are not always required to explain how they make decisions, then why should algorithms be held to a higher standard of fairness than humans? And if we do hold these algorithms that are increasingly embedding themselves into society to such a high standard, are we not then stunting the growth and development of artificial intelligence as a field of research? If we have clear records of the ways in which artificial intelligence algorithms have denied marginalized groups life, liberty, and the pursuit of happiness, then why not pass legislation to address these specific injustices — a solution that is both more explicit and immediate? Is the approach of a Constitutional amendment too vague for the dangers posed by destructive algorithms? Could an engineer not simply release to the courts the millions of intricately interconnected parameters and data points that comprise their algorithm — information that does provide insight into how the algorithm makes decisions, but is not in any way comprehensible to the court or the victims — and thereby satisfy the right to an explanation, absolving themselves of the automated inequity in their machines? And, extrapolating upon this, is it technologically feasible to expect the engineers themselves to understand their own algorithms? After all, artificial intelligence models are designed to be smarter than humans, identify trends in data better than humans, and most importantly, organize and restructure themselves in a way that maximizes performance.
If a model consists of millions of inputs, hidden layers, and activation functions interconnected in a convoluted web, and these parameters are adjusted and fine-tuned by their own doing rather than that of the engineer, how could we expect said engineer to explain to a single victim how their individual data was used to make a singular decision?
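To put a rough number on that web of parameters: in a fully connected network, each layer contributes (inputs × outputs) weights plus one bias per output. A short sketch with invented layer sizes shows how quickly the count reaches the millions:

```python
# Rough parameter count of a fully connected (dense) network:
# each layer contributes fan_in * fan_out weights plus fan_out biases.
# The layer sizes below are invented for illustration.

def dense_param_count(layer_sizes):
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out + fan_out
    return total

# e.g. 1,000 input features, three hidden layers, one output score:
sizes = [1000, 2048, 2048, 512, 1]
print(dense_param_count(sizes))  # 7296001 — about 7.3 million parameters
```

Even this modest hypothetical model has millions of individually tuned numbers, none of which maps cleanly onto a human-readable reason for any single decision.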
While these considerations are crucial, I believe that it is actually the very nature of both Constitutional amendments and algorithms powered by artificial intelligence that makes a right to explanation ideally suited to an amendment.
Two main facets distinguish an amendment from a law: its intentional vagueness, and its ability (and necessity) to remain indefinitely relevant and applicable. Let us use an amendment that has already come up in this article: the Fourteenth Amendment. The Fourteenth Amendment was ratified in 1868, during the Reconstruction Era immediately following the Civil War. While it was ratified in response to recently emancipated African Americans still facing clear discrimination and inequality under the law, its applications have extended to other marginalized groups. In the landmark 2015 Supreme Court case Obergefell v. Hodges, the Court ruled that the liberty ensured by the Due Process Clause and the equality ensured by the Equal Protection Clause of the Fourteenth Amendment render same-sex marriage bans unconstitutional. Without the intentional vagueness of the amendment, it would have been useless in this particular instance, which arose 147 years later. While the amendment was ratified out of an urgency to protect African Americans, it purposely did not enumerate the specific marginalized groups that must be protected under the law. In typical Constitutional amendment fashion, it framed a fundamental moral — that every American must be equal in the eyes of the law — in such a way that allowed members of the LGBTQ+ community, a group that legislators did not have in mind when writing the Fourteenth Amendment, to use it a century and a half later to address their specific situation. And time will only tell whether another marginalized group unforeseen by the drafters of the Fourteenth Amendment will use it for their sake. The amendment simply frames the value, leaving specific, unanticipated situations to be addressed by future laws or court cases.
It is these very two properties that I claim would make a right to explanation of how algorithms reach their decisions a robust and effective amendment. As clichéd as the word is, the rate of development of artificial intelligence is truly unprecedented. Even just one decade ago, artificial intelligence did not play the integral role it currently does in society and in individual lives. In our day-to-day activities — commuting to work, buying groceries at the store, checking on our friends on Facebook — we are generating millions of data points that are used to make predictions and to find new sectors of life in which artificial intelligence can be applied. It is constantly evolving, and doing so faster than we as humans have been able to keep up with morally. We are constantly discovering new ethical injustices caused by artificial intelligence, but we are doing so after the fact, once it has already harmed a large group of people. While lawmakers can try to remedy this by passing legislation that addresses these injustices, these laws do not actually tackle the root issue: engineers are deploying artificial intelligence algorithms that evolve so rapidly that they cannot explain how the models make their decisions. And even if they did know, they are in no way required to disclose this information.
We need to establish a protection that is unalienable. We need to ensure that as this revolutionary, groundbreaking field of artificial intelligence continues to advance and evolve in unpredictable ways, there is a security that is constant — one that is both vague and applicable enough that it cannot be undermined by some future, unforeseen development of AI. That is not so far-fetched considering how much the field has grown in the past twenty years alone. The reality is that we humans have barely scratched the surface of the capabilities of artificial intelligence. If we are already experiencing a lack of understanding and accountability from engineers for their models and algorithms while the complexity of artificial intelligence as we currently know it is relatively low, then the gap between what engineers deploy and what they can understand and answer for is only going to widen exponentially. We cannot continue to be complacent as engineers use the shield of a blackbox to deflect all moral and ethical responsibilities. We have seen what this has done to millions of undeserving people, and we have yet to see how the scale and magnitude of these injustices will compound with (very little) time.
While it may be inevitable that putting an ethical constraint on engineers will inhibit the growth of artificial intelligence, the tradeoff is marginally slower technical advancement in exchange for equal opportunity, equality under the law, and minimal harm to humans. We have seen how the unchecked growth of artificial intelligence has harmed people on a grand scale, and how artificial intelligence not only can afford, but needs, a check so that humans can keep up morally.
We know it is not always feasible for an engineer to explain how a single decision was made among millions of parameters and data points. But that is precisely the beauty of the vagueness of the amendment. A right to explanation does not necessarily entail an elaborate, end-to-end illustration of the model, from person X’s data entering the model to the decision being output. It still requires the engineer, however, to know the ins and outs of all of the ethical decisions that they made when designing the algorithm — how different features are weighted, whether the training data is representative and fair, whether inherent prejudices were preprogrammed into the original model — and to be able to explain these before a court and the people being harmed by the model. It instills a sense of accountability in the engineer from the very beginning of the design process, knowing full well that at any moment they could be constitutionally required to explain the prejudices — or lack thereof — programmed into the model.
And finally, yes, we are ultimately holding machines to a higher standard than humans. But let me ask this: why should a machine that is designed to perform at a superhuman level not be held to a higher standard than that of humans? The entire purpose of deploying artificial intelligence models is to perform tasks and computations at a scale and efficacy of which humans are incapable. While humans, like machines, are flawed, they cannot inflict damage at the rate, magnitude, and scale that machines can. It baffles me that the expectation is to equalize these vastly different beings. Maybe it is contrary to popular belief, but I welcome this higher standard of regulation on machines that wield destructive power far beyond what is fathomable by humans.
A constitutional right to explanation is by no means the be-all, end-all solution to the mass injustices posed by artificial intelligence. It leaves much to be interpreted, failing to iron out the specific details and enumerate the situations in which it is applicable. There is still the possibility that engineers will muddy their explanations of their algorithms in a way that satisfies the amendment but fails to actually mend any discrimination. And aside from the specifics of this amendment, at the end of the day, it is penalized for being just that — an amendment. This alone poses its own unique and daunting set of obstacles.
But we must not let the difficulty of ratifying a constitutional amendment keep us from having the conversation. We live in a new digital era powered by artificial intelligence, and it is clear that the protections set in place over the past, essentially digitally-free, 250 years are no longer enough to protect people from these revolutionary, but equally threatening, machines. Opaque algorithms, plus prejudice, plus the power to make life-altering decisions: that is a formula to which we must not continue to succumb. We need an amendment that frames our values and rights in this digital society, a frame vague enough to remain applicable in the future as artificial intelligence continues to shapeshift and evolve in unpredictable ways. Yes, it will be necessary for legislation and court cases to address specific situations, but the frame is the first step. It will create a culture of accountability and ethical consideration. It will give people a sense of security in a world where machines determine our fate beyond our control, and a way of knowing how to improve upon the judgments these machines hand down. It is what will keep us as a society valuing the lives of We The People over the prospects of technology.