Cyber Security tricks to slow down fraudulent activity on your website

Drew Jarrett
6 min read · Aug 12, 2022

In this post I’ll suggest a few Cyber Security 101 tricks to help you spot malicious users creating “fake accounts”, introducing 3 layers (approx. 8 weeks of development time in total) to slow down fraudulent activity that abuses your platform, costs you $$, and pollutes your data.

Day to day I connect with Product Managers and Developers across the globe, working on some pretty cool products / tools / solutions. I’ve noticed a trend, or more like a pain point, shared across the industry: a rise in malicious and fraudulent activity! It’s not a new concept, but attacks are getting smarter, and, following the “white hat” vs “black hat” battle, we need to step up too.

Why do malicious users want to create fake accounts? Let’s take a Marketing platform as an example. Imagine creating 1,000s of fake accounts at scale, then using each account to send emails + social media posts + publish Ads across the web, seen by 1,000s of people before getting detected or reported. Repeat each day. All in an attempt to spread misinformation that influences markets or elections, enforces a bias… or just tricks someone into sharing personal information.

So what can we do? There is no single solution / door lock. Think of Cyber Security as introducing layers of doors with different locks to slow down malicious activity. Cue Mr Burns’ (Simpsons) Flawed Nuclear Security System :).

[Source: YouTube — Mr Burns Flawed Nuclear Security System]

I’ll introduce 3 layers of protection, each with the development time I would likely quote to implement, test and roll something like this into production. This is my view and opinion on what I would do, not advice from any company, nor a single set of solutions that will solve all your security problems.

Layer 1: Stop the Bots

First things first, Bots! In short — Bots are computer programs, scripted to act like a human and perform a sequence of actions (and reactions) across the web. Each bot can be programmed for a specific website and a specific task, performing calculated ‘attacks’. A while back I wrote a post on Bot Swatting in data analysis (if you’re interested). The TL;DR here is that as time goes on Bots are getting more and more sophisticated at mimicking human behavior, and as such more difficult to detect.

If you think about it, the sign up and creation funnel a user would click through is the same for everyone, making it easy to automate. Bring in the bots! Our malicious user will use Bot(s) to create as many fake accounts as possible at scale, so they have a better chance of a few slipping past detection.

Mitigation: “I’m not a Robot” detection & two-factor authentication [Dev Time: ~2 Weeks]

I’m sure at some point you’ve had to decode an image of a poorly written word to be allowed to continue what you’re doing. As annoying as this is (is it me or do those words get harder and harder to read?), it adds a barrier to entry that removes the predictability of completing a task, e.g. our sign up funnel.

As an example, reCAPTCHA is a script that adds an “I’m not a Robot” detection widget onto your page. It has a Security Preference setting, which I would suggest setting to “Most Secure” if you suffer a lot from fake accounts.
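One gotcha worth calling out: the widget only runs on the client, so your server still has to verify the token it produces against the reCAPTCHA siteverify endpoint before letting the sign up continue. A minimal sketch in Python (the secret key placeholder is hypothetical; keep the real one in server-side config):

```python
import requests

RECAPTCHA_SECRET = "your-secret-key"  # hypothetical placeholder

def is_human(recaptcha_token: str, user_ip: str | None = None) -> bool:
    """Verify the token the reCAPTCHA widget produced on the client."""
    payload = {"secret": RECAPTCHA_SECRET, "response": recaptcha_token}
    if user_ip:
        payload["remoteip"] = user_ip  # optional, helps score the request
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data=payload,
        timeout=5,
    )
    return resp.json().get("success", False)
```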

Also introduce two-factor authentication early on, asking the user to verify an email or phone number in order to continue, and obviously checking whether that email or phone number was used in the past. That’s something a Bot will struggle with.
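As a flavour of the “was this email or phone used before” check, here’s a minimal sketch; the in-memory set below is a hypothetical stand-in for a database table:

```python
import hashlib

# Hypothetical store of identifiers we've already verified; in production
# this would be a database table, not an in-memory set.
seen_identifiers: set[str] = set()

def normalise(identifier: str) -> str:
    # Lowercase + strip so "Foo@Bar.com " and "foo@bar.com" count as the same.
    return identifier.strip().lower()

def can_start_verification(identifier: str) -> bool:
    """Only send a verification code if this email/phone hasn't been seen before."""
    digest = hashlib.sha256(normalise(identifier).encode()).hexdigest()
    if digest in seen_identifiers:
        return False  # used in the past; flag for review instead of continuing
    seen_identifiers.add(digest)
    return True
```

Storing a hash rather than the raw identifier is a nice bonus: the reuse check works without keeping a plaintext list of emails and phone numbers around.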

Layer 2: Compromised account defenses

So far we’ve been talking about fake accounts, but let’s not forget to prioritize protecting your real users (i.e. the real accounts), or risk a malicious user ‘hijacking’ real accounts. If your website / tool / service is a target for fake accounts, it’s safe to assume hijacking attempts will follow.

[Image Source: Safe internet vector created by jcomp — www.freepik.com]

This not only makes fake activity extra difficult to detect, but most importantly risks losing the trust of the reason your business exists: your users.

Mitigation: Cross-site scripting (XSS) defenses [Dev Time: ~3 Weeks, depending on how much you implement]

There are lots of attacks a “hacker” can perform in an attempt to exploit vulnerabilities and gain control of a user’s account (Cross-site Scripting, Man-in-the-Middle (MitM) Attacks, Phishing, SQL Injection… if you’re interested in hearing more about these, drop me a comment and I’ll write a blog on it).

A Cross-site scripting attack is when a “hacker” finds a way to inject extra ‘script’ into the website / tool / service, reprogramming it to gain access to details (i.e. the user’s account details), or altering a feature (such as the log in box) to trick users into sharing their data. It’s like someone breaking into your house and leaving little traps around it to trick you.

Protecting against Cross-site scripting will also add layers of protection against other vulnerabilities. Needless to say, please do it 🙏 think of this as a must, not a nice to have. Here are the top (speedy) updates you can make…

  • Add a Content Security Policy (CSP) to your website. It’s a setting that tells the browser not to run any script that doesn’t belong to you. It’s quick (<1 day) to implement. I’ve blogged about CSPs previously, worth a read.
  • Filter any user input, i.e. anywhere a user would populate form details…etc. Sanitizing the input ensures only the characters you would expect are included (and gets rid of everything else that could be harmful). Lots of open source solutions are available.
  • Do the same for user output, i.e. sanitize anywhere stored user content (e.g. a username) is output onto the page (see the sketch after this list).
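To make the CSP and output-sanitization bullets concrete, here’s a minimal sketch assuming a Python / Flask backend; the route, policy string and header values are illustrative, not a drop-in config:

```python
from flask import Flask, request
from markupsafe import escape  # installed with Flask; HTML-escapes untrusted text

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # First bullet: tell the browser to only run scripts from our own origin.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response

@app.route("/profile")
def profile():
    # Third bullet: escape stored user content on output, so a username like
    # "<script>steal()</script>" renders as harmless text instead of executing.
    username = request.args.get("name", "anonymous")
    return f"<h1>Hello, {escape(username)}!</h1>"
```

The same escape-on-output idea applies whatever your stack is; most web frameworks ship an equivalent of `escape()`.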

Layer 3: Use Machine Learning to automate the ‘hunt’

So far we’ve slowed down Bots and “hackers”, but what about groups of real humans repeatedly creating accounts using stolen credit card information at scale ¯\_(ツ)_/¯ (it happens every day, in big numbers).

[Image Source: https://www.freepik.com/vectors/process-automation]

This is where Machine Learning allows us to step up and build a model (or series of models) to recognise the patterns these groups exhibit and alert us to counterfeit activity, flagging accounts for manual review or, if really confident, stopping them in their tracks. For example, models can be…

  • Trained to detect a batch of similar accounts being created in one go, using ‘spammy’ content, hashtags and keywords.
  • Trained to spot less obvious patterns such as human (or non-human) behaviours (scroll, tap, time in an area, time to complete a task…) both during account creation and as the “user” starts interfacing with and using your tool.

Mitigation: Fraud detection model(s) [Dev Time: ~6–8 Weeks, depending on how much you implement]

An ML Model is only as good as the data you give it…

  • Start by looking at data unique to you. Identify features of the data you collect (e.g. time on a page, user location, time of day, login information…etc) that would be relevant, then label the data so the ML Model, when being trained, knows what combination of these features is fraudulent and what is real.
  • There are also externally available datasets you can utilize, e.g. Kaggle’s “Credit Card Fraud Detection” dataset, to enrich yours.

There are CodeLabs out there to help guide you, plus a couple of quick examples below…
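First, a minimal sketch of a batch-trained fraud model using scikit-learn; the file name and feature columns are hypothetical stand-ins for the kind of labeled data described above:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical labeled dataset: one row per account-creation event, with
# behavioural features plus a manually reviewed "is_fraud" label.
df = pd.read_csv("account_events.csv")  # assumed file and column names
features = ["time_on_page_s", "hour_of_day", "accounts_from_ip", "failed_logins"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_fraud"], test_size=0.2, stratify=df["is_fraud"]
)

# class_weight="balanced" matters because fraud is (hopefully) the rare class.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced")
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```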

Look at creating an Incremental Model that can be improved over time based on your feedback, i.e. as accounts are manually reviewed we feed back to the model which of its calls were correct.
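Here’s a minimal sketch of that feedback loop, again with scikit-learn, whose `partial_fit` supports exactly this kind of incremental training on each batch of manually reviewed accounts:

```python
from sklearn.linear_model import SGDClassifier

# A linear model trained with SGD supports partial_fit, so each batch of
# manually reviewed accounts refines it without retraining from scratch.
model = SGDClassifier(loss="log_loss")  # logistic regression, fit incrementally
CLASSES = [0, 1]  # 0 = legitimate, 1 = fraudulent

def update_with_review_batch(X_batch, y_batch):
    """Feed the outcome of a manual review cycle back into the model."""
    # `classes` is required on the first call and must stay consistent after.
    model.partial_fit(X_batch, y_batch, classes=CLASSES)
```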

Thanks for reading. One final takeaway to remember 🔐 please don’t just stop here. There are a ton more Security layers you could (and should) implement; keep evolving them to stay one step ahead of malicious users. Let me know how you get on.


Drew Jarrett

Working @Google across SYD & LDN. Developer. Innovative. Problem solver. Passion for making a difference through what I do. Proud Dad of two amazing girls.