The intersection of Web3 and artificial intelligence has emerged as a hot-button topic. Despite the current frenzy, the fusion of these emerging technologies is not a new trend. Experts and innovators have been straddling the two sectors for years. As generative AI continues to dominate the news, the insights of these bilingual cross-practitioners are particularly relevant as the complementary strengths of AI and blockchain are increasingly explored.

But what of equity and inclusion in this union? AI is an almost $100 billion market, and crypto is estimated at $1 trillion. The White House is touting voluntary commitments from AI companies “to help move toward safe, secure, and transparent development of AI technology.” This raises the question: will that level the playing field in the innovation economy?

Madam AI

Congresswoman Yvette Clarke (NY-09), a former vice chair of the U.S. House Committee on Energy and Commerce, was sounding the alarm about the potential impacts of AI long before today’s seemingly sudden mainstream FOMO outbreak. The New York representative introduced “The REAL Political Ads Act” this year to expand disclosure requirements for campaign ads that use generative AI. With elections on the horizon, she is focused on mitigating risks.

U.S. Rep. Clarke’s bill is just the latest in a string of legislation that she has authored prioritizing protections and inclusion. She and Senator Ron Wyden (OR) put forth a proposal to hold AI companies accountable for bias in their systems back in April 2019. And when Google
fired lead engineer Timnit Gebru in the fall of 2020 for documenting ways unchecked AI models and algorithms can spread racism, the congresswoman sprang into action with a letter to the company’s CEO asking for answers. Senator Wyden and Members of Congress across both chambers were cosigners of the Congressional inquiry.

Should Washington Court Big Tech?

Congresswoman Clarke is among a small group of Capitol Hill legislators with expertise in AI, as Congress and the White House scramble to get up to speed in order to develop a regulatory framework.

Inexplicably, fostering equity and inclusion, in addition to eliminating bias and protecting privacy, seems absent from topline goals for these federal engagements.

Yes, Washington officials need to gain proficiency in this area. However, they should also hear from respected and renowned experts like Gebru as they turn to industry for guidance on important topics like risk mitigation.

Interestingly, the White House opened its doors this past spring to the CEOs of Google, which fired Gebru almost three years ago for exposing the risks central to the AI debate, and OpenAI, which the Federal Trade Commission is now investigating.

Hidden Figures In Washington’s Blind Spot

In a Q&A with the Washington Post a few weeks ago, Representative Clarke was asked about industry engagement in policymaking, and she stated, “I think it’s great that they’ve offered their insights. It’s important for us to examine it and to look at whether we believe there needs to be enhancements or there are some elements of what they’re proposing that makes sense. I don’t think we take it hook, line and sinker.”

Thoughtful consideration and deliberation are vital to rulemaking, but Washington officials must also create space for the voices of the invisible figures in their blind spot in order to craft and advance an equitable and comprehensive framework. Falling back on their sacred go-to of exclusive, high-profile audiences with CEOs and senior executives will no longer cut it in a new era where equity and inclusion are no longer afterthoughts for the public.

To provide context around the dichotomy between AI and blockchain and flesh out the capacity for inclusion, I took the matter to a seasoned expert, Anne T. Griffin, who has advised companies on how to navigate machine learning, AI, and Web3 in product development. She uses her engineering background to help teams build culturally inclusive products. Additionally, she has lectured at prominent universities across North America, including Columbia University and the U.S. Military Academy at West Point.

Here are some of the insights she shared with me.

Q&A With AI And Blockchain Expert Anne T. Griffin

The White House just announced the Biden administration has secured voluntary commitments from seven companies to tackle risks posed by artificial intelligence. But consumer protection and risk mitigation measures are not the same as measures to foster financial inclusion in the nearly $100 billion AI market. What are your thoughts?

The thing that stands out to me about this is that these are voluntary commitments. Big Tech continues to say it can and will govern itself and doesn’t need more policy or regulation. But after a decade of these promises, we see very few examples of this industry choosing to do the right thing without the “threat” of regulation.

Plus, in addition to consumer protection and risk mitigation measures, policies specifically focused on financial inclusion with AI need to be put in place. And these policies need to address both the prevention of harm to people and the consequences for causing it.

Given all the hype about the synergy between Web3 and AI, is financial inclusion still possible at that intersection?

This is tricky. The financial data used to train existing AI models has historic financial exclusion baked into it. You could build an AI model based only on data from crypto markets, but crypto data involves a lot more trading and volatility than what the average person needs or wants in their personal financial life. Financial inclusion is still possible, but it will require companies, organizations, and government institutions to build thoughtful, intentional products trained on data from which as much bias as possible has been removed, and to understand the incentives of building for everyone at scale.

In 2020, Timnit Gebru raised concerns about racial bias in AI, and Google fired her. Today, her warnings are being hotly debated now that generative AI is all the rage. Do you think her warnings were ignored?

Black women and other women of color, such as Timnit Gebru, Safiya Noble, and Ruha Benjamin, had been calling out racial bias in AI long before 2020. But many initiatives by large companies to combat racial bias in their algorithms were performative or had unclear real-world impact. With proprietary, “black box” algorithms, it can sometimes be difficult to prove an algorithm is biased in a way that has legal ramifications. And there are few meaningful consequences for companies that cause harm with their algorithm-powered products. The reality is that many companies do not see racial bias in their products as a problem of real consequence right now.

Washington officials have extended the welcome mat to large AI companies and are courting wealthy executives for guidance on policy. Do you worry that a lack of inclusion in policymaking and rulemaking may lead to greater inequities?

We continue to see these inequities widen because of the lack of inclusion in policymaking. Lawmakers are worried about AI as a privacy risk but don’t ask questions about how AI is affecting financial, employment, housing, and health outcomes. To borrow the title phrasing from Virginia Eubanks’s book on algorithms and inequity, in many ways AI has been automating inequality. In fact, I’ve heard very little from policymakers about concerns around AI’s impact on vulnerable and underserved communities. Many of them struggle to understand how their own personal data are used by Facebook or TikTok, and they could be asking better questions if they engaged the right experts from vulnerable communities to inform them.

What is the ideal marriage between blockchain and AI to help foster equity?

I’d like to offer two thoughts.

One is giving users more control over their data and ways to benefit from its monetization. A number of early-stage companies are working on a more decentralized web that gives users that control, along with the ability to monetize their data when possible.

The second is reducing information asymmetry around financial products. Blockchains, as public ledgers, greatly reduce information asymmetry. However, because of lucrative existing business models, as well as privacy laws such as GDPR, many large companies don’t want to work with public blockchains. The right product, with the right incentives for both individuals and institutions, could change this.
