Can We Trust A.I. To Be As Good As We Need It To Be?


It doesn’t take much to lose trust in a financial planning relationship. I’ve seen it happen in real time when an analysis uses the wrong name in a presentation (not good) or relies on inaccurate spending data (very bad). Depending on their severity, these errors can raise the question of what bigger mistakes may be hiding in the overall analysis. While we take care to double- and triple-check our work, the fact is that humans are prone to mistakes. And while that is not ideal, relying solely on computers to do the work presents far greater problems, in my opinion.

There’s a lot of buzz about artificial intelligence (A.I.) in finance, and it seems to be the next big thing you need to get behind or be left behind. As with all things on trend, I’m initially skeptical. My biggest issue is how A.I. learns and who, or what, it is learning from. The machine learning that informs A.I. depends on an accurate and relevant dataset, and much of that learning draws on public datasets that are susceptible to the inaccuracies and biases of the people populating them. Add to that the technology industry’s reversal of its earlier pleas for government regulation to assure safety and security in A.I. systems, now set aside in favor of bigger, faster growth, and my trust in the system starts to wane.1 And don’t get me started on content providers’ continuing fight to keep copyrighted material from being used to train A.I.

If you can get beyond the possibility that A.I. datasets may be flawed, you then need to consider the learning systems and algorithms used to train models on those datasets. Training can be supervised, with classifications and regressions (labels) provided by a human data modeler, or unsupervised, where the model learns from patterns and other similar structures within the data itself. It may be true that large enough datasets will incorporate outliers and eliminate human bias, thus making machine learning a better or truer system of learning. But none of this happens without human involvement, at least at the beginning.
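To make the distinction concrete, here is a toy sketch of the two styles in a few lines of Python. Everything in it is invented for illustration (the "spending" numbers, the "low"/"high" labels, and the helper functions); it is not how any real A.I. vendor builds its systems, only a stripped-down picture of labeled versus unlabeled learning.

```python
# Supervised vs. unsupervised learning on a tiny 1-D toy dataset.
# All names and data here are hypothetical, for illustration only.

def supervised_fit(points, labels):
    """Supervised: a human supplies the labels, and the model learns
    the average (centroid) of each labeled group."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    # Assign x to whichever label's centroid is nearest.
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def unsupervised_fit(points, k=2, iters=10):
    """Unsupervised: no labels at all. A plain 1-D k-means groups the
    points purely by the structure of the data itself."""
    centers = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(centers[i] - p))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

spending = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]            # toy "dataset"
labels = ["low", "low", "low", "high", "high", "high"]  # human-provided

model = supervised_fit(spending, labels)
print(predict(model, 2.5))                 # prints: low
print(sorted(unsupervised_fit(spending)))  # prints: [2.0, 11.0]
```

Note that the unsupervised version finds the same two groupings without ever seeing the human labels, which is the appeal, while the supervised version's answers are only as good as the labels a person chose to provide, which is the risk the article describes.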

However, the companies that design and train these machine learning systems can be less than transparent about how they build them, which limits how well researchers can understand the process and its outputs.2 There are now calls for A.I. companies to “open source” their code, which sounds good in theory. Being able to see the code should answer questions about the learning systems. However, there are concerns about the security risks of allowing this information to be copied or modified, particularly when bias may be programmed into the model.

I’m generally not a fan of black-box systems whose output I can’t independently verify or make sense of myself, if only to be able to explain the results to my clients. It is why I try to be relentless about looking at different types of reports on the same analysis for greater context. It is also why I sometimes mourn the demise of WordPerfect (iykyk). I’m not the only person skeptical of A.I.’s real impact in the world. In her opinion piece for the New York Times, Tressie McMillan Cottom concludes, “Most of [A.I.] settle for what anyone with a lick of critical thinking could have said they were good for. They make modest augmentations to existing processes. Some of them create more work. Very few of them reduce busy work.”3

So, where does that leave me in the new age of A.I.? Currently, I don’t have a need that A.I. can fulfill, although I am sure there are some efficiencies to explore at some point. I am interested in seeing how other advisors use A.I. in their practices and in creating a successful use case for our own process. If there is a service with verifiable and meaningful data and documentation on the data modeling, along with security assurances, then I may wade in. Until then, I’ll survive without the “wisdom of the crowds.”


1https://www.nytimes.com/2025/03/24/technology/trump-ai-regulation.html

2https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/ (George Musser)

3https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html

West Financial Services, Inc. (“WFS”) offers investment advisory services and is registered with the U.S. Securities and Exchange Commission (“SEC”). SEC registration does not constitute an endorsement of the firm by the SEC nor does it indicate that the firm has attained a particular level of skill or ability. You should carefully read and review all information provided by WFS, including Form ADV Part 1A, Part 2A brochure and all supplements, and Form CRS.

Certain information contained herein was derived from third party sources, as indicated, and has not been independently verified. While the information presented herein is believed to be reliable, no representation or warranty is made concerning the accuracy of any information presented. Where such sources include opinions and projections, such opinions and projections should be ascribed only to the applicable third party source and not to WFS.

This information is intended to be educational in nature, and not as a recommendation of any particular strategy, approach, product, security, or concept. These materials are not intended as any form of substitute for individualized investment advice. The discussion is general in nature, and therefore not intended to recommend or endorse any asset class, security, or technical aspect of any security for the purpose of allowing a reader to use the approach on their own. You should not treat these materials as advice in relation to legal, taxation, or investment matters. Before participating in any investment program or making any investment, clients as well as all other readers are encouraged to consult with their own professional advisers, including investment advisers and tax advisers.