Explainability is not a technology issue — it is a human issue. 

Therefore, it is incumbent on humans to be able to explain and understand how AI models arrive at the inferences they do, said Madhu Narasimhan, EVP and head of innovation, Strategy, Digital and Innovation at Wells Fargo.

“That’s a key part of why explainable AI becomes so important,” she emphasized to the audience during a fireside chat at today’s VentureBeat Transform 2023 event. 

Narasimhan explained to the crowd and moderator Jana Eggers, cofounder and CEO of synaptic intelligence platform Nara Logics, that Wells Fargo did a “tremendous amount” of post hoc testing on its Fargo virtual assistant to understand why the model was interpreting language the way that it was.
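The article does not describe what that post hoc testing looked like in practice, but a common approach to explaining why a language model interpreted an utterance a certain way is perturbation-based local explanation. The sketch below is a minimal, hypothetical illustration using LIME on a toy intent classifier; the training utterances, intent labels, and model are all invented, and nothing here reflects Wells Fargo's actual tooling.

```python
# Hedged sketch: post hoc explanation of a toy intent classifier with LIME.
# All data, labels, and model choices are hypothetical stand-ins.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: user utterances mapped to banking intents.
utterances = [
    "what is my checking balance",
    "show my account balance",
    "send 50 dollars to alex",
    "transfer money to savings",
    "report my card lost",
    "my credit card was stolen",
]
intents = ["balance", "balance", "transfer", "transfer", "card_issue", "card_issue"]
class_names = ["balance", "card_issue", "transfer"]  # sklearn sorts labels alphabetically

# Simple bag-of-words classifier standing in for the real model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

# Post hoc step: LIME perturbs the utterance (dropping words) and fits a
# local surrogate to see which words drove the predicted intent.
explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    "can you move 20 dollars to my savings account",
    model.predict_proba,
    num_features=4,
    top_labels=1,
)
top = explanation.top_labels[0]
print(class_names[top], explanation.as_list(label=top))
```

Run after the fact on a deployed model, this kind of analysis surfaces which input words most influenced a given interpretation, which is one way to probe "why the model was interpreting language the way it was."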

As it builds out models, the company builds out explainability in parallel, and an independent group of data scientists separately validates the models.

>>Follow all our VentureBeat Transform 2023 coverage