On the last day of the Applied AI for Distributors conference, cybersecurity expert Theresa Payton’s dynamic keynote, “How AI, Deepfakes and ChatGPT Are Transforming Cybercrime – and What to Do About It,” left attendees with this message:
“Everything starts with the human user story.”
Payton was the first female White House CIO, is the CEO of Fortalice® Solutions and author of the award-winning book “Manipulated,” and is widely recognized as one of the country’s preeminent authorities on secure digital transformation. Here are some takeaways from her session:
Collect human user stories.
Payton shared a story from her time at the White House to illustrate why “everything starts with the human user story”: the goal is to design security and solutions with the end user in mind.
She realized employees weren’t reporting their smartphones lost or missing, or telling her team when they were traveling internationally – a violation of protocol and direct mandates. When those devices connected to dangerous cell towers around the world, she’d have to remotely kill the phones, creating a negative experience for users and hurting her team’s reputation.
So, she investigated.
After speaking with users to get their story, she received this feedback:
- “First of all, your briefings are long and boring.”
- The acceptance policy users signed for their smartphones said: “I understand if this is lost, damaged or stolen, this could be punishable to the fullest extent of the law.”
- Because they didn’t want a black mark on their records, calling her office to report the missing device was their last resort. Users did everything they could think of to avoid it.
Payton took this feedback to her team and asked: “What are the two things we want them to do?”
- Call before they go on international travel.
- Alert her team immediately when devices are lost.
How could they encourage users to do what they needed them to do?
They piloted the concept of a White House “Happy Meal”: a one-gallon Ziploc baggie containing the smartphone, a card with a number to call, presidential-branded M&Ms, jellybeans and other useful items from the supply closet.
Now, when her team delivered a briefing, they’d hand the bag to the user, encourage them to eat the chocolate and candy, and say:
- “Put this card in your wallet. When you go on international travel for work or for fun, call this number. This is the 24/7 number for my team. Call us at least four hours before you depart, and we’ll tell you what to do.”
- “If you lose your device, call the same number.” Because her team built the proprietary software, they could track smartphones and kill the devices if necessary.
Ultimately, the pilot worked. Instead of reporting a lost device after a day or more, users called within an hour. As a result, the overall security of White House operations improved.
Criminals can teach a master class in understanding the human user story. You should, too.
Before introducing technology such as AI, you need to understand the human story. When you don’t, you’re vulnerable to criminals who do. Understand:
- Why is the person using the technology?
- What were they doing before they started using that technology?
- Is the technology actually helping them? Or is it making things worse?
- What are they doing instead of using technology?
Payton has learned that no matter how well professionals do their jobs, criminals can slip through the narrowest gaps if they know the human user story better than you do. They know exactly how we do things, which is why:
- They can commit wire transfer fraud.
- They can steal people’s passwords.
Focusing on the human user story, which you already know how to do, will better prepare you to defend yourself.
How we spend every minute generates data and transactions that must be secured.
Payton shared an infographic from Domo about how your employees, customers and third-party suppliers spend their time each day. Understanding this is critical.
As predictive and generative AI rise in use, cybercrime increases.
Eighty-five percent of cybersecurity professionals attribute the acceleration of cybercrime in the past 12 months to predictive AI and generative AI, according to Security Magazine and Deep Instinct research.
Payton said the two most common cybercrime techniques are:
- Social engineering, using generated voice or video likenesses of others to get confidential or personal information
- Password credential reuse (a quick defensive sketch follows this list)
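Credential reuse works because a password stolen in one breach often unlocks accounts everywhere else it was reused. A minimal sketch of one defense is below, assuming a set of known-breached password hashes; the sample list is a stand-in for illustration, not a real breach feed.

```python
# Illustrative breached-password check (sample data only, not a real breach feed).
import hashlib

# Breach corpora are commonly published as SHA-1 hashes; these entries are stand-ins.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ("password123", "qwerty", "letmein")
}

def is_breached(password: str) -> bool:
    """Return True if the password's SHA-1 hash appears in the known-breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_SHA1

print(is_breached("password123"))                    # True  -> reject; require a unique password
print(is_breached("C0rrect-Horse-Battery-Staple!"))  # False -> allowed
```

Rejecting reused, already-breached passwords removes the easiest path attackers have into an account.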
These attacks are sophisticated. Three questions to ask:
- Do you have a policy for employees and contractors regarding how to treat client data and other proprietary information?
- Have you asked generative AI about your company and your executives?
- Do you have a policy for responding if you find inaccuracies, personal information or client information within generative AI outputs?
Payton’s five-step framework for governance
Payton worked with the CIO of a global insurance company whose chatbot was outperforming the company’s most seasoned customer service agents. After asking some questions, Payton learned the engineers who programmed the chatbot worked for a third-party vendor. She asked the CIO about the maker-checker role.
The maker-checker role ensures that the account owner is actually the one requesting the action – opening up an account, withdrawing, moving money, etc. The “maker” is responsible for initiating transactions, while the “checker” verifies and approves them.
The CIO responded to her: “That’s a great question. We’re going to look into it.”
Payton asked a simple question. One she knew to ask because she understood the human user story. Without putting hands on a keyboard, she found a vulnerability.
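To make the maker-checker idea concrete, here is a minimal sketch; the class and field names are hypothetical, not Payton’s or the insurer’s implementation. The key control is that the maker who initiates a transaction can never be the checker who approves it.

```python
# Minimal maker-checker sketch (hypothetical names; illustrative only).
from dataclasses import dataclass

@dataclass
class Transaction:
    maker: str                 # who initiated the request (could be a chatbot)
    action: str                # e.g. "open_account", "wire_transfer"
    amount: float
    approved_by: str | None = None

class MakerCheckerQueue:
    def __init__(self, checkers: set[str]):
        self.checkers = checkers               # identities authorized to approve
        self.pending: list[Transaction] = []

    def submit(self, txn: Transaction) -> None:
        """Maker initiates a transaction; nothing executes yet."""
        self.pending.append(txn)

    def approve(self, txn: Transaction, checker: str) -> bool:
        """Checker must be authorized and must not be the maker."""
        if checker not in self.checkers or checker == txn.maker:
            return False
        txn.approved_by = checker
        self.pending.remove(txn)
        return True

# Usage: a chatbot acting as maker cannot approve its own request.
queue = MakerCheckerQueue(checkers={"supervisor_1"})
txn = Transaction(maker="chatbot", action="wire_transfer", amount=5000.0)
queue.submit(txn)
print(queue.approve(txn, checker="chatbot"))       # False: self-approval blocked
print(queue.approve(txn, checker="supervisor_1"))  # True: independent approval
```

Applied to an AI chatbot, the same pattern means the bot can draft or initiate an action, but a separately authorized human or system must approve it before anything moves.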
Her 5-step framework is:
- Understand the Human User Story: Document customer-centric and employee-centric stories.
- Establish a Safe AI Team: Leverage an existing council or set up a new one composed of line-of-business executives and representatives from technology, risk, legal, customer service, marketing and security (add other roles as needed).
- Pilot-Test-Learn: Ensure all AI implementations go through a pilot phase that tests resiliency, reliability, privacy, security and efficacy.
- Trust But Verify
- Deploy
Technology and AI aren’t bad. It’s the misuse of technology by bad actors that makes it harmful.
Payton demonstrated deepfake technology to show how the economics work in favor of criminals. They need only a little knowledge, minimal computing power and access to free tools to create something convincing, especially to very busy targets.
In her presentation, she shared a deepfake video and a deepfake audio clip of herself. To create them, Payton used only free tools and a standard-power computer, and for security she produced just one unrefined version of each. Even so, it wasn’t hard to see how more powerful tools – in the wrong hands – could make deepfakes entirely plausible. In fact, when she shared the audio clip with her team, most couldn’t tell the voice wasn’t hers.
There are good applications of deepfake technology – such as for training. But it’s very easy to see the potential downside. Payton recommended creating a deepfake passphrase that is not easily guessed by looking at public data available about you.
Because deepfake technology is widely available, your loved ones could fall prey to claims that, for example, you’re kidnapped and that they need to wire money to the criminals. Create a passphrase someone can ask for if they receive a call using your voice.
What Payton sees coming in 2025
Every year, Payton forecasts what’s next in cybersecurity. Here’s what she expects:
Bots will betray us: One of the platforms that use bots will be compromised.
- How to prepare: Create a playbook that covers:
  - What will I do if one of those platforms I use is compromised?
  - How will I know?
  - How do I respond?
  - What data am I allowed to have?
  - How am I anonymizing or tokenizing that customer data so that if a compromise happens in 2025, we’ll be OK? (A minimal tokenization sketch follows this list.)
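To ground the anonymize-or-tokenize question above, here is a minimal sketch, assuming a simple in-memory vault (the class and field names are hypothetical). Direct identifiers are swapped for random tokens before a record is shared with a bot platform, and only the vault you keep in-house can reverse them.

```python
# Minimal tokenization sketch (hypothetical; not a production-grade vault).
import secrets

class TokenVault:
    """Maps random tokens to sensitive values; only the vault can reverse them."""
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str | None:
        return self._store.get(token)

vault = TokenVault()
record = {"name": "Jane Doe", "account": "123456789", "balance": 4200}

# Share only tokenized identifiers with the bot platform; keep the vault in-house.
safe_record = {
    "name": vault.tokenize(record["name"]),
    "account": vault.tokenize(record["account"]),
    "balance": record["balance"],   # non-identifying field passes through
}
print(safe_record)  # tokens reveal nothing useful if the platform is compromised
```

If the bot platform is breached, the attacker gets tokens rather than customer identities; the mapping never leaves your environment.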
Criminals will be able to reproduce biometrics.
- How to prepare:
  - Is there an additional factor that doesn’t rely solely on a user ID, password and biometrics?
  - Can we use behavior-based analytics – for example, an alert that triggers when there is an anomaly? (See the sketch after this list.)
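As a hedged sketch of the behavior-based trigger mentioned above (the fields, history and threshold are invented for illustration), a simple baseline comparison can flag a login that falls far outside a user’s normal pattern and prompt step-up verification:

```python
# Simple behavior-based anomaly trigger (illustrative thresholds only).
from statistics import mean, stdev

def is_anomalous(login_hours: list[int], new_login_hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour of day sits far outside the user's historical baseline."""
    if len(login_hours) < 5:          # not enough history to judge
        return False
    mu, sigma = mean(login_hours), stdev(login_hours)
    if sigma == 0:
        return new_login_hour != mu   # any deviation from a constant pattern is an anomaly
    return abs(new_login_hour - mu) / sigma > z_threshold

# Usage: a user who always logs in mid-morning suddenly appears at 3 a.m.
history = [9, 10, 9, 11, 10, 9, 10]
print(is_anomalous(history, 3))    # True  -> trigger step-up verification
print(is_anomalous(history, 10))   # False -> normal behavior
```

In practice the signal would combine many behaviors (location, device, transaction patterns), but the principle is the same: verify harder when behavior departs from the baseline.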
Synthetic identity fraud and corporate espionage will increase: It won’t be easy to discern which digital assistant is legit and which is not.
- How to prepare:
  - If you’re using digital assistants, which ones are you using and how are you vetting them?
  - How are they protecting your information?
What to do next
If this all feels like too much, start here:
- Ask your channel partners for a self-attestation that they have a policy around predictive and generative AI technologies. If they don’t have one, the request will start the conversation and set a process in motion.
- Make sure your roadmap is resilient, because the infrastructure is fragile: If AI tools fail, like when ChatGPT went down recently, what’s your backup?
- Who is doing your “trust but verify” on your chatbots (whether that’s your company, your customers or the third-party vendors)?
- Join the free service from the FBI, InfraGard.
Check out our takeaways from Days 1 and 2.