The generative AI tools now becoming publicly available are likely to be transformative.
These so-called chatbots use built-in algorithms and billions of data points gathered from across the internet to generate text, video and images. User interactions are via easy-to-use chat functions.
But before you dive in, it’s important to understand: Chatbots are dramatically different from the “one-way” search engines we all use. With a chatbot, every query you enter, any information included in your profile and the data gleaned from your computer equipment (IP address, etc.) are captured, stored and used for training.
Such is the nature of AI: The chatbots are constantly learning, constantly becoming smarter—which can make you, your family or your business more vulnerable to cybercrime.
Thus, it’s critical that you conscientiously limit the information you put into the system, starting with how you set up your chatbot account.
Here are some ways to help protect yourself.
First, understand how chatbots work
A new generation of artificial intelligence (AI) tools is creating global excitement and prompting wide investment. Companies worldwide are embracing these tools to gain efficiencies and to develop products and services that promise to transform industries in ways we cannot yet imagine.
Indeed, you or your children may already be using ChatGPT, Bard, Jasper or another generative AI chatbot tool to help you draft a letter, essay or resume. Or you may be exploring ways to use one of these tools to become more efficient.
But be aware: Chatbots are smarter and collect more information than the search engines and apps you use every day to check the weather, order food delivery or hail a ride. With chatbots:
- Every interaction is recorded and saved (even if a chat or query is deleted, a copy exists in a de-identified state)1
- Your user profile information is also captured—IP address, location, phone number, logon data, device information (make, model), browser cookies, network activity, etc.2
- Information about you is also extracted and aggregated from other sources, such as social media pages and other online sites and services.3
Moreover, because these chatbots are designed for use by the public, the information they hold is widely accessible, unlike the proprietary AI tools many businesses use to gain efficiencies and bring new products and services to market.
With each chat and data point entered, a chatbot is better able to make well-informed assumptions, which may help you in the near term, but may also be exploited by cybercriminals seeking to target you or your family members.
Also know: The chatbots now being released to the world are not currently subject to any government regulations.
Take care setting up an account
The conversational nature of generative AI tools often leads people to enter more information than they typically put into a search engine.
Before jumping in, follow these simple precautions:
Anonymize your profile
- Create a new, dedicated email address when you sign up for a chatbot. Avoid using the email account you use for banking, work, social media or other personal services
- Be similarly cautious with the phone number you enter and the other identifiers you provide
- Use a VPN to further anonymize your connection with the chatbot. A VPN encrypts your traffic and masks your device’s IP address and approximate location, making your profile harder to link to you
Choose a service with care
- Use a reputable generative AI service from the Apple App Store or Google Play, or access a service directly from your browser. Be cautious about using experimental new AI tools
- Create a strong program password to stop others from gaining access to your chat history
- Use two-factor authentication, if offered
Be discreet—and diligent
- Do not enter sensitive or personal information into the chatbot, such as people’s names, birthdays, tax information, home addresses, etc. (If you wouldn’t want to see that information in a newspaper, don’t put it in a chatbot)
- If you need help writing a resume, for example, give the system parameters to use—not personal information
- Opt out of data collection—if you do not want your chats or conversations used for training purposes, go to the chatbot’s settings and adjust the data-collection options
- Log off after each chat—as with an online banking session, protect your information by logging off when you finish each chat. This prevents the system from continuing to learn about you and what else you are doing
- Regularly delete your account’s cookies and history, so the chatbot cannot continue to collect information and learn more about you
Protect yourself from cybercriminals
Public AI tools give hackers and cybercriminals new ways to profile and target you and your family. For example, they can:
- Write better code and malware to break into systems you regularly use
- Generate phishing emails in various languages without telltale signs of forgery, such as bad grammar or misspellings
- Create disinformation or “data-poison” existing tools
Responding to this heightened threat requires extra steps to verify that every email, phone call and text you receive is from a valid source. (Always validate the source via a separate channel. Don’t simply hit reply.)
It’s especially important to verify any request that attempts to change a bank account number or other critical information.
Also, make sure you have these basic cybersecurity protections in place:
Use multi-factor authentication wherever it’s offered
- Protect your online banking, email, social media, shopping, airline miles and other accounts by making it significantly harder for someone to access your information and assets
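For readers curious how the one-time codes from authenticator apps work under the hood: most implement the TOTP standard (RFC 6238), which derives a short code from a shared secret plus the current time. Below is a minimal illustrative sketch in Python using only the standard library; the secret shown is the RFC’s published test value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a
    base32-encoded shared secret and a Unix timestamp."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `step`-second intervals elapsed
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1 secret, T = 59 seconds, 8 digits)
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # 94287082
```

Because each code depends on both the current time and a secret shared only between you and the service, an attacker who steals your password alone still cannot log in.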
Use strong and unique passwords
- Consider using a reputable password manager with encryption to keep your passwords secure and up-to-date
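“Strong and unique” simply means long, random and never reused. Most password managers generate such passwords for you, but if you want to see how it works, here is a short sketch using Python’s cryptographically secure `secrets` module; the character set and 20-character default are illustrative choices, not requirements.

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def make_password(length=20):
    """Generate a random password containing at least one lowercase letter,
    one uppercase letter, one digit and one symbol."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    # Resample until every character class is represented
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw

print(make_password())  # a different random password on every run
```

The key design choice is using `secrets` rather than the `random` module: `random` is predictable and unsuitable for anything security-related.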
Continuously run anti-virus software
- Keep protective software running in the background on all your digital and mobile devices to ensure they remain free of malware
Keep device operating systems up-to-date
- Promptly update system software whenever a new release is available to close any loopholes hackers may try to exploit
An emerging threat: Voice deepfakes
With AI, an individual’s voice can easily be captured from social media posts, webinars and Zoom calls, then replicated.
Cybercriminals combine so-called voice deepfakes with a target’s own profile and/or bank information in an ever-growing array of schemes.
In the “grandparent scam,” for example, fraudsters aim to dupe older family members into sending cash by playing a recording of a younger relative purportedly asking for help. One preventative measure is to establish a family safe word. It can help determine if an urgent phone request is in fact coming from a family member in distress.
Also know: J.P. Morgan has tools in place to detect synthetic voices, as well as multiple processes and procedures to help mitigate scam attempts. Furthermore, J.P. Morgan uses multiple authentication measures to verify clients’ identities.
We can help
Your J.P. Morgan team can provide best practice guidance and resources on how to keep yourself, your family and your information cyber secure. Visit our Cyber & Fraud Prevention Hub for more information.