- CHOSEN PROJECT:
Fraud detection for Zelle transfers: This is a real and current problem. We can build on general fraud detection approaches and add Zelle-specific nuances. I don’t fully understand the methodology the fraudsters use, but assume it’s some variation of phishing. That means we would have to analyze a set of interactions between ‘phisher’ and ‘phishee’ to label each as fraud or not. Because it is a current problem, existing fraud-catching systems are inadequate, so there is scope for innovation. I assume the impact is high not just monetarily but also in customer experience and churn.
- Four points for the submittal:
- Confirm the challenge
- Identify the stakeholders
- Understand the challenges of implementing the solution to our scenario
- How do we know the solution is working?
Points 2-4 discussed:
Point 2 – Stakeholders identified are the customers who use Zelle, the operations team that works on the program, the digital team, data science team, data engineering team, legal, and branding. [can we change branding to Marketing?]
Point 3 – Following challenges discussed:
- Ensuring the AI program has a high success rate in identifying which Zelle transfers are fraudulent. Having the AI program peg a fraudulent transaction as ‘not fraud’ would be very bad and would open the company to lawsuits.
- However, a question raised was how definitively we want to declare something fraudulent. Instead, as discussed, we could let customers know through pop-ups or flags that a transaction looks suspect, giving them pause to think and decide whether they want to go through with it. If a customer is caught up in a fraud scheme, the organization can claim it went above and beyond to notify them that the transaction looked suspect from the beginning. In other words, we gave customers a choice.
- Giving customers the option to opt in to letting the company use their transactions as ‘fuel’ for the AI program.
- If we don’t have enough data in-house, an additional source could be third-party companies that collect and sell data to our competitors; they would likely be willing to sell to us as well.
- Implementing the program in a way that earns customers’ trust. For example, what happens if a transaction is flagged, the customer abandons it, and it turns out the transaction was legitimate and the customer missed a payment? This could be a source of complaints; customers could say we are flagging too much.
- To respond to these complaints, the organization can highlight the successes from this program. The organization can also be transparent and explain to customers why we are flagging certain transactions.
- Another potential challenge is increased transaction time, especially if flagged transactions get a human review. We would need a team monitoring Zelle transactions, stopping some if needed and using alerts, without making customers feel there are too many roadblocks.
- As for pegging the transaction, we could flag it as “Potentially Suspect.” What could be key is what we communicate to the customer along with that flag. We could send a message with questions for the customer to consider before proceeding, such as “Have you verified the identity of this recipient?” and “Have you considered other means of transferring this money?” One of fraudsters’ tools is building a sense of urgency; this flag would slow down the process and give the customer time to think critically before proceeding. (One thought is to apply this stop gap to all transactions.) This would address the challenge of public perception of how banks treat these transactions: the public believes banks do not care and are not doing enough. This would show we are taking extra, very visible steps to protect our customers, and it could be used as a pitch to get more customers to allow us to use their data to prevent fraud.
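The flagging idea above can be sketched in code. This is a minimal illustration only: the risk signals, threshold, and prompt wording are hypothetical placeholders, not the rules the team would actually deploy.

```python
# Hypothetical sketch of a "Potentially Suspect" flag with reflection
# prompts meant to slow the customer down. Signal names and the $500
# threshold are illustrative assumptions, not real business rules.

PROMPTS = [
    "Have you verified the identity of this recipient?",
    "Have you considered other means of transferring this money?",
]

def assess_transfer(amount, recipient_is_new, flagged_by_model):
    """Return (status, prompts) for a proposed Zelle transfer."""
    # Treat the transfer as suspect if the model flagged it, or if a
    # sizable amount is going to a never-before-seen recipient.
    suspicious = flagged_by_model or (recipient_is_new and amount > 500)
    if suspicious:
        return "Potentially Suspect", PROMPTS
    return "Clear", []
```

In practice the questions would be shown to the customer before the transfer is released, giving them the pause the notes describe.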
Point 4 – How to ensure the plan is working
- A key success metric is seeing transaction times trend down again, assuming our AI has weeded out the fraud.
- Another measure of success is seeing a decrease in customer fraud complaints.
- Publishing customer support contact information or a hotline where customers can report incidents could help with tracking this data. We could also send out surveys to gauge customers’ experiences.
- One side benefit of fraud going down is that the organization gains a reputation, and fewer fraudsters will make attempts on our customers.
- On the other hand, this might make the organization a target for more experienced scammers.
- An example provided is companies that use Salesforce. Some will draft agreements that Salesforce cannot publicly state they are doing business with Company X. Hackers go where the money is. If they know Salesforce is doing business with Company X, hackers will attempt to use Salesforce as a backdoor.
- Therefore, a solution is to keep business relationships secret. Only our business partners would need to know we are the gold standard for secure transactions.
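The two metrics above (transaction times and fraud complaints) could be tracked with a simple per-period aggregation. The log field names below are invented for illustration; a real system would pull these from the transaction and complaints databases.

```python
from statistics import median

# Hypothetical transaction log: each entry records processing time in
# seconds and whether the customer later filed a fraud complaint.
# Field names ("seconds", "fraud_complaint") are assumptions.
def summarize(period_log):
    """Return the two key success metrics for one reporting period."""
    times = [t["seconds"] for t in period_log]
    complaints = sum(1 for t in period_log if t["fraud_complaint"])
    return {
        "median_time_s": median(times),
        "complaint_rate": complaints / len(period_log),
    }
```

Comparing these numbers period over period would show whether transaction times are trending back down and complaints are falling.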
Module 2 Submittal
- Make sure that the word count in your response is within 10% of the specified word count.
Point 1 (max 100 words): Confirm challenge
As bank operations officers, our challenge is to detect and eliminate fraudulent Zelle transactions on our network. This is a real-world problem that can be addressed with an artificial intelligence solution such as machine learning (“ML”). Absorbing a high volume of transactions to discern which may be fraudulent is an ideal task for ML. Its “learning” would be invaluable to a bank operations manager as they create a program to detect and eliminate the problematic transactions. (76 words)
Point 2 (50 words or fewer): List stakeholders involved
The internal stakeholders in this challenge are the company owners; the operations, digital, data science, and data engineering teams that work on the product; and the legal and public relations/branding departments. External stakeholders include the customers who use Zelle, government agencies such as the FBI, and other financial institutions.
Point 3 (360-400 words): Breadth of challenge
A nimble IT team would be created from operations and the digital, data science, and data engineering teams. They would find the proper AI solution and create a timetable to launch. This team will have to obtain training data and learn how to integrate the AI into our systems. Data Science, Operations, and Legal will need to figure out how to handle false negatives (the AI calling a fraudulent transaction ‘not fraud’) and false positives (flagging legitimate transactions as fraud). They will also monitor the AI post-launch.
Legal will mitigate risks associated with using an AI solution by partnering with IT to negotiate and finalize any agreement with a supplier. Data usage rights need to be understood so we can obtain and use training data, current and future customer data, and rights to any improvements upon the AI if not reserved by the AI supplier or not created in-house. Legal will also draft or revise end user license agreements. If training data is difficult to obtain, IT may have to source it from third parties and work with Legal to obtain it. Legal will also mitigate any risks from a regulatory perspective.
Public Relations will create a campaign for this program, explaining why we are using AI to solve this problem. The campaign would include gathering transaction data to help train and improve the AI, explaining how the solution would work, and using success stories. PR would partner with Legal to draft language notifying the customer of potentially fraudulent transactions. The system would alert the customer to a problem transaction and provide them information to help them evaluate if they wish to proceed or terminate. This could include asking the customer if they are sure of the recipient’s identity or to confirm that this was not initiated in error. If the transaction must be stopped, a process would be implemented to communicate directly with the customer with options to speak with a human representative and to be provided with the reasons why the transaction was stopped.
A potential problem with this speed bump is increased transaction time for our customers. This may be the result of the messages sent to customers or human review of the AI results. Marketing should develop proper feedback questions to gauge customer satisfaction. PR and IT would then collaborate on any adjustments to the messaging. More human review may be necessary to fine tune the system.
Point 4 (135-150 words): How do we know it’s working?
Two key metrics that will be measured are the transaction times after implementing the program and the number of fraud incidents. For the former, we wouldn’t want the customer to feel a transaction is taking too long if we employ features such as asking them to verify a recipient’s identity. For the latter, one way to monitor potentially fraudulent activity is whether an account user is redirected to a page different from the homepage after logging in. Tracking how many users stop using Zelle is also a good indicator of whether the project is succeeding. Furthermore, an agreement could be drafted with businesses that use our program to keep our ties confidential. For example, several companies that use Salesforce have an agreement that Salesforce cannot publicly disclose the businesses it works with. This ensures hackers don’t use Salesforce as a backdoor.
- How can a leader or multiple leaders help with the challenge? (Max 100 words, excluding citations)
The Zelle AI leaders can engage internal and external entities that have access to customer data and request statistics to feed the AI and promote machine learning. The legal, public relations, political, and judicial leaders could form an xTEAM to spearhead drafting legal provisions on privacy issues (Cirqueira et al., 2021). Zelle leaders can offer the AI team full access to the required data to support machine learning. The PR leader can handle clients who willingly join the program to secure the Zelle platform and transactions (Pathak & Mande, 2019). These leaders must also involve senior executives early in the process for buy-in.
- Can AI be used to support the leader(s) in their efforts? If yes, how? (max 100 words, excluding citations)
AI can support xTEAM leaders because Zelle transactions are repeatable, nontrivial, and have a measurable business impact (Taylor, 2012). The AI would be designed to flag fraudulent transactions, making it easier for the team to overcome the Zelle challenge (Alhaddad, 2018). The AI is the tool that makes it possible to separate legitimate transactions and flag those that seem fraudulent. With AI, leaders can show how effectively the tool protects clients from potential fraudsters (Dhieb et al., 2020). The leaders depend on AI technology to identify loopholes and help create a robust authentication system to avoid payment delays and stop fraud. AI is pivotal to leaders’ success.
Alhaddad, M. M. (2018). Artificial Intelligence in Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing. ResearchBerg Review of Science and Technology, 2(3), 25-46.
Cirqueira, D., Helfert, M., & Bezbradica, M. (2021, July). Towards design principles for user-centric explainable AI in fraud detection. In International Conference on Human-Computer Interaction (pp. 21-40). Springer, Cham.
Dhieb, N., Ghazzai, H., & Besbes, H. (2020). A Secure AI-Driven Architecture for Automated Insurance Systems: Fraud Detection and Risk Measurement. IEEE Access, 8, 58546–58558. https://doi.org/10.1109/access.2020.2983300
Pathak, J., & Mande, V. (2019). Organizational Risk, Fraud, Forensics, Anti Money Laundering Laws and Controls, and Corporate Corruption. Emerald Publishing Limited.
Taylor, J. (2012). Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics. Boston: Pearson Education, Inc.
- Given the answer to the previous question, what specific tool(s) can help and how? Reflect on the people, process, and technology:
- Who are the people needed and what are their skills? (100 words or fewer)
Data Ingestion Engineer – Ingests data from various operational systems into ad-hoc storage.
Data Engineer – Performs ETL (extract, transform, and load) on the multiple data sources used for AI modeling.
Data Scientist – Performs feature engineering and builds the AI model.
Software Engineers – Deploy the AI model and integrate it with operational systems.
QA Analyst – Ensures data quality at every measurable point, including the model’s performance statistics in trials.
Customer Experience Analyst – Designs the customer’s interaction with the system.
Project Managers – Manage resources and timelines.
Business Managers – Deliver periodic readouts to stakeholders, including budget and risks.
- What process(es) do they need to redesign or change? (100 words or fewer)
The Zelle transaction process will need to be redesigned, with input from the Customer Experience Analyst, in at least the following ways: customer notification when fraud is detected, additional transaction verification steps, and liability waivers if customers decide to force the transaction through. Internally, process updates and additional training will be given to the customer care agents who handle calls on suspected fraudulent transactions. The agents will be trained to interpret the AI’s output (likely a probability with a potential root cause), translate it, and present it back to the customer.
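The hand-off described above, where an agent translates a probability and root cause into plain language, might look like the following sketch. The score bands, reason codes, and message templates are all assumptions for illustration.

```python
# Hypothetical translation of model output for a customer care agent.
# Reason codes, score cutoffs, and wording are illustrative assumptions.
REASONS = {
    "new_recipient": "this recipient has not been paid from your account before",
    "unusual_amount": "the amount is much larger than your typical transfers",
}

def agent_script(score, top_reason):
    """Turn a model score (0-1) and its top root cause into plain language."""
    reason = REASONS.get(top_reason, "the transfer pattern looked unusual")
    if score >= 0.8:
        return f"We stopped this transfer because {reason}."
    if score >= 0.5:
        return f"We flagged this transfer because {reason}. You may proceed."
    return "This transfer cleared our checks."
```

Keeping the translation in one place like this would also make it easier for Legal and PR to review the exact wording customers hear.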
- What technology could be helpful? (100 words or fewer)
Enterprise warehousing on the cloud can be used to store the data. Data manipulation technologies (SQL, PySpark, etc.) can handle the ETL work. Python and Jupyter notebooks help in building the ML model. Vendors like H2O.ai provide automated tools for interpreting ML models that could be helpful. Software Engineers can use MLOps practices and MLflow to deploy the model, and API technologies will help with integration into operational systems. QA analysts could use automated QC tools (e.g., Anomalo) and/or business intelligence tools (e.g., Tableau) for data visualization and QC. Lastly, Scaled Agile tools like Jira Align can be used for project management.
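As a toy illustration of the ETL-then-feature-engineering step the Data Engineer and Data Scientist would own, here is a plain-Python version (a real pipeline would use SQL or PySpark, and the field names are invented):

```python
# Toy transform step: aggregate raw transaction rows into per-customer
# features an AI model might train on. Field names ("customer_id",
# "amount") are invented for illustration.
from collections import defaultdict

def build_features(transactions):
    """Aggregate raw transaction rows into per-customer feature dicts."""
    by_customer = defaultdict(list)
    for tx in transactions:
        by_customer[tx["customer_id"]].append(tx["amount"])
    return {
        cid: {
            "tx_count": len(amounts),
            "avg_amount": sum(amounts) / len(amounts),
            "max_amount": max(amounts),
        }
        for cid, amounts in by_customer.items()
    }
```

Features like these (counts, averages, maxima per customer) are the kind of inputs the notes' "feature engineering" step would produce before model training.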
1. To successfully implement your group’s chosen tool(s), what might individual leaders have to do? (max 200 words, excluding citations)
a. For example, do they have to do more sensemaking?
2. How might someone’s leadership signature influence their choices? (max 150 words, excluding citations)
3. What would teams have to do? (max 200 words, excluding citations)
4. What can leaders and/or teams do to make sure AI is accepted as it provides the tool to perform tasks differently and better? (max 150 words, excluding citations)