SEC Chief Warns of Inevitable AI-Driven Financial Crisis Without Regulation
According to Gary Gensler, the Chair of the United States Securities and Exchange Commission (SEC), a financial crisis fueled by the widespread use of artificial intelligence (AI) is “nearly unavoidable” unless regulators take swift action.
In an interview with the Financial Times, Gensler emphasized the urgent need for regulators to address the risks posed by the concentration of power in AI platforms, which could jeopardize financial stability.
While Wall Street banks have been experimenting with AI for fraud detection and market surveillance, they have recently expanded its use to include account opening processes and brokerage apps.

Gensler expressed concern that if multiple institutions rely on the same data models, it could lead to herding behavior and destabilize financial markets, potentially triggering a recession.

“In the future, we will likely experience a financial crisis, and people will realize that we relied too heavily on a single data aggregator or model,” Gensler warned.
He further added, “Perhaps it will occur in the mortgage market or some sector of the equity market.”
Gensler predicts that an AI-triggered financial crisis could occur as early as the late 2020s or early 2030s. The previous major crisis was the global financial crisis (GFC) of mid-2007 to early 2009, which caused the most severe economic turmoil since the Great Depression.
Experts attribute the GFC to a combination of factors, including the downturn in the American housing market, reckless risk-taking by investors and financial institutions, and regulatory and policy errors. The crisis spread globally through interconnectedness in the financial system.

Challenges in Regulating AI
While Gensler acknowledges the need for increased regulation around AI, he recognizes the difficulty in shaping effective regulations.

“Addressing AI’s impact on financial stability is challenging because our current regulations focus on individual institutions, banks, money market funds, and brokers. It’s not designed for the horizontal nature of AI, where multiple institutions rely on the same underlying models and data aggregators,” explained Gensler.
The SEC has already proposed “conflict of interest” rules to prevent investment firms from prioritizing their own interests over those of investors when using predictive data analytics (PDA) or similar AI technologies. However, the proposal has faced criticism for potentially hindering innovation and AI adoption.

Gensler is not the only regulator expressing concern about the potential harm caused by AI. The Federal Trade Commission has initiated a review of OpenAI, the creator of ChatGPT, due to worries about consumer harm and data security.

In September, Senate Majority Leader Chuck Schumer hosted closed-door listening sessions with tech leaders, including Elon Musk and Satya Nadella, to discuss AI regulation. However, critics argue that these discussions have yielded little progress in developing effective legislation.

While the challenges of regulating AI persist, it is crucial to address the risks associated with its widespread use to prevent a future financial crisis.

Lack of Progress in AI Regulation Forum
Attendees of the September AI forum hosted by Senator Schumer expressed disappointment with the lack of substantial progress in developing regulations. Limited time for meaningful discussion and questions left many feeling that the forum was merely a show without real advancements.

“In terms of regulatory suggestions, I didn’t hear much,” said Senator John Kennedy. Senator Josh Hawley also questioned Schumer’s commitment to finding solutions, comparing it to the lack of action on antitrust issues in recent years.
The forum took place shortly after a Senate Judiciary subcommittee hearing that highlighted the negative impact of government inaction and weak regulations on AI development, which disproportionately benefits major tech corporations at the expense of American citizens.
Andrew Thornebrooke contributed to this report.
How does the lack of transparency and explainability in AI systems pose challenges for regulators in ensuring compliance with existing regulations?
Other financial authorities share Gensler’s concerns about the risks posed by AI in the financial sector. The Bank for International Settlements (BIS) has also warned about the potential for AI to amplify existing risks and create new ones in the financial system. In a recent report, the BIS highlighted the need for regulators to develop a framework that addresses the challenges posed by AI in areas such as data privacy, algorithmic fairness, and systemic risks.
One of the main challenges in regulating AI is the lack of transparency and explainability. AI systems often make decisions based on complex algorithms that are difficult to interpret. This opacity can make it hard for regulators to determine whether AI systems are being used in a way that complies with existing regulations. Additionally, the use of AI in financial markets can give rise to market manipulation and insider trading, further complicating the regulatory landscape.
To address these challenges, Gensler suggests that regulators consider implementing a framework that promotes transparency and accountability in AI systems.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."