{"id":1957338,"date":"2023-02-14T09:05:51","date_gmt":"2023-02-14T14:05:51","guid":{"rendered":"https:\/\/wordpress-1016567-4521551.cloudwaysapps.com\/plato-data\/part-three-ai-security-can-make-or-break-a-financial-institution-michael-boukadakis\/"},"modified":"2023-02-14T09:05:51","modified_gmt":"2023-02-14T14:05:51","slug":"part-three-ai-security-can-make-or-break-a-financial-institution-michael-boukadakis","status":"publish","type":"station","link":"https:\/\/platodata.io\/plato-data\/part-three-ai-security-can-make-or-break-a-financial-institution-michael-boukadakis\/","title":{"rendered":"Part Three: AI Security Can Make or Break a Financial Institution (Michael Boukadakis)"},"content":{"rendered":"
<p>\u201cIn order to fully realize the potential of AI, we have to mitigate its risks,\u201d the White House Office of Science & Technology Policy recently tweeted. \u201cThat\u2019s why we\u2019re excited about @NIST\u2019s release of the AI Risk Management Framework\u2026\u201d NIST, formally the U.S. Department of Commerce\u2019s National Institute of Standards and Technology, released its Framework on January 26th to help innovators manage the many risks of artificial intelligence technology, which is trained on data about things like human behavior.<\/p>\n
<p>In the context of retail banking customer service, that behavioral data must be combined with a user\u2019s account details and personally identifiable information (PII) for AI to create the personalized interactions that elevate a financial institution\u2019s customer experience (CX). The collection and use of the data inputs that AI technology needs to do its job in financial services must therefore be safeguarded to the greatest degree.<\/p>\n
<p>Security is the last of the four pillars explored in this series, which together support the transformed and evolving customer experience that bank and credit union leaders should expect from their investments in AI.<\/p>\n
<p>Four in five senior banking executives agree that unlocking value from artificial intelligence will distinguish outperformers from underperformers. However, privacy and security concerns were identified by bankers, in the latest Economist Intelligence Unit survey, as the most prominent barrier to adopting and incorporating AI technologies in their organizations.<\/p>\n
<p>Seasoned, rules-based chatbots require less in the way of privacy and protection than AI-backed bots, because they\u2019re limited to answering basic, hours-and-location-type questions. Next-generation bots, such as Bank of America\u2019s Erica (which has helped 32 million users), blow self-service wide open. These intelligent virtual assistants (IVAs) can give financial advice and complete commands that require authentication, such as scheduling payments, making transfers and compiling reports. The best part? They get smarter over time through an always-on learning loop that amasses data during every interaction. More accessible, quality data means better-performing AI, but the involvement of all of this sensitive information calls for strict security measures.<\/p>\n
<p>While it may seem that chatbots are vulnerable to surreptitious attacks, they actually provide stronger security than human agents handling service requests. During these AI-driven self-service interactions, data moves between the user, the bot and the backend systems, eliminating human touchpoints. This reduces the likelihood of process breakdowns and of sensitive information being compromised by accident or, unfortunately, on purpose. Additionally, smartly designed IVAs do a better job than humans at thwarting attacks by fraudsters impersonating accountholders: they are highly trained to detect suspicious activity by recognizing patterns and anomalies. AI automation of fraud controls increases the overall security of consumer banking.<\/p>\n
<p>As powerful as artificial intelligence can be as a competitive advantage in banking, a lack of strong security measures is a nonstarter. Without them, bank and credit union leaders jeopardize their customers\u2019 and members\u2019 money, privacy and loyalty, as well as their financial institution\u2019s assets, resources and reputation. To stay competitive, FIs need to give customers and members peace of mind that their data and money are fully protected.<\/p>\n
<p>It\u2019s important to understand that NIST\u2019s Framework is a <b>voluntary<\/b> guidance document for organizations designing, developing, deploying or using AI systems. Not all AI systems are built responsibly. When choosing AI-fueled CX solutions, make certain that the technology\u2019s security standards are up to par for use in the financial services industry.<\/p>\n
<p>If done wrong, AI technology can put a financial institution\u2019s future at risk. If done right, AI-powered CX solutions, which improve customer satisfaction and loyalty, can solidify a financial institution\u2019s role as a future market leader.<\/p>\n