Artificial intelligence can be a tool for financial empowerment, but it is not the key to closing the racial wealth gap, which predates AI, experts explained.
“I don’t think AI is an end-all-be-all and the mothership to Black economic mobility because the systemic issues that Black communities have faced on a general scale have not been rectified,” said Lamar Laing, founder & CEO of Copiafy, a personal financial management platform.
Historically, Black people have been barred from banking institutions, subjected to lending biases, and denied financial services in their neighborhoods through a practice known as redlining. These are just a few of the many discriminatory practices that have inhibited economic mobility.
Today, that discrimination persists. Data reveals that mortgage applicants of color are more likely to be denied loans than their white counterparts despite similar financial standing. Black and Hispanic households are also five times more likely to be unbanked than white households, with mistrust cited as a deterrent to banking. A 2024 analysis by the Federal Reserve Bank of St. Louis found that Black households averaged $311,000 in wealth, while white households averaged $1.4 million.
Throwing AI into the mix, financial experts said it could assist in closing the wealth gap. For businesses, it provides ways to streamline menial tasks such as verifying documents and answering customer queries. For consumers, AI can help detect fraudulent activity or make recommendations.
“I think AI is great for operations and productivity — helping you plan things out, helping you with redundant tasks. It definitely has helped us optimize our pipeline from idea to product,” said Laing.
With Copiafy, Laing explained that users can consolidate their financial information into one place. This enables them to keep track of expenses like payment deadlines or credit reports more efficiently.
Though not an AI company, Laing said he plans to leverage AI to help users, emphasizing the need for ethical implementation.
“Our theory is essentially that if we’re going to plug our clients into an AI, we need guardrails around the information that goes in,” he said. “If we’re gonna use AI for anything related to your personal finance, we will do our best to make it hard for the user to accidentally tell the AI things that are personal to their finances because a lot of third-party integrations tend to use your financial data against you — and that’s what we’re trying to prevent.”
Research validates Laing’s concerns. AI systems rely heavily on data to function, often collecting personal information such as location, online activity, health records, or financial details if entered into the system. This data is then used to train AI models, enabling them to create personalized customer experiences and make predictions.
The problem, however, is that this data isn’t always collected with user consent, leading to privacy concerns. This is especially troubling when that data is used to create algorithms or predictions that can result in bias towards users if the data collected isn’t diverse, representative, or thoroughly analyzed for error.
Black people are disproportionately harmed by algorithmic biases such as those in facial recognition, online recruiting, word association, and other tools.
Third-party AI tools are AI algorithms that an organization outsources for use in its own business, and they pose risks of their own, including “reputational damage and the loss of customer trust, financial losses,” according to the MIT Sloan Management Review.
AI may widen the wealth gap
AI, experts said, also has the potential to exacerbate financial disparities rather than close them.
“It’s irresponsible for companies to allow your financial data, which is attached to all types of other data, to just be streamlined through an entity that you have no control or guardrails over,” said Laing.
His concerns about data privacy reflect a larger issue within the digital age. Alondra Nelson, Harold F. Linder professor of Social Science at the Institute for Advanced Study and former deputy assistant to President Joe Biden, shared similar worries.
“We are a society whose economy is pretty much built on data,” said Nelson. “Do we need data privacy laws? Absolutely.”
During her time at the White House, Nelson spearheaded the blueprint for the AI Bill of Rights, a framework and guide for AI design, use, and deployment in the U.S. The blueprint consisted of five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback.
“This was the first document about AI I think that would come out of the White House and really reflect the priorities and the values of the Biden-Harris administration,” she said.
The blueprint would become a pillar of President Biden’s executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which was in effect from October 2023 until the subsequent administration rescinded it in January 2025. The executive order made some impact, with government agencies pursuing safety agreements and testing different AI models.
To ensure AI is ethically implemented, Nelson emphasized the need for sustained legal regulation.
“I think any of the principles of the AI Bill of Rights embodied into law would be fantastic,” she said. “There’s a real sense that there are things that the public wants and needs to be able to trust and adapt to these systems in their lives.”
For consumers, knowing how to use AI could be pivotal for financial understanding. Mariah Howard, chief innovation officer for the National Black Chamber of Commerce, noted that upskilling may be especially worthwhile for marginalized groups who are disproportionately affected by the digital divide.
“Actually take the time to invest in upskilling and create an avenue for yourself to learn that we are entering a new digital age that will inquire and require us to be participants,” she said. “Research and upskill yourself so that you are technically sound and somewhat proficient [so that] you can speak and understand on a general basis.”
Learning how to work the system may prevent the system from outworking you. A McKinsey & Company study found that Blacks and Latinos have the highest risk of job displacement due to automation and that 132,000 African American jobs could be lost by 2030.
“If reskilling efforts are not undertaken, this trend only stands to worsen,” another McKinsey & Company study stated.
The Brooklyn Bank, a nonprofit that teaches financial literacy to communities of color, has considered incorporating AI into its lessons but echoes these concerns.
“It makes things a lot easier, and it opens a lot of doors too, freeing up time for people to do other things,” said Jude Bernard, founder of the Brooklyn Bank. “On the harmful side of it, as consumers, you also have to be aware that it’s not a perfect system yet and that it makes mistakes, so you can’t put all your trust into it.”
The common thread is that AI presents both pros and cons in advancing Black economic mobility. While experts acknowledge that AI tools can help streamline financing and banking for historically marginalized groups, they also emphasize the need for vigilance and continuing to learn more about these systems.
“We are still in the very beginning stages of what is soon to come,” said Howard. “So that is all to say that we have time, and there is no rush to be inaccurate and to make errors and irreversible, immutable mistakes, if you will. It’s really the most critical and pivotal time for us to do extensive higher level deep research.”
The post appeared first on New York Amsterdam News.