Let me tell you a story….
Like many large organisations nowadays, we offer a secure file transfer service on the internet. This service lets customers and partners send files to us, and lets us send files out to a variety of partner organisations. It has to be open to anywhere on the internet because legitimate connections can come from absolutely anywhere.
Naturally, an open service comes to the attention of the bad guys fairly quickly nowadays. And if you’re doubtful of this, might I suggest firing up an ssh server on the internet, setting up a monitor on failed logins and waiting a few minutes.
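If you fancy trying that experiment, a minimal sketch of the monitoring half might look like this. It assumes a Linux box where sshd logs to something like /var/log/auth.log (the path and exact line format vary by distro), and just counts failed logins per source IP:

```python
import re
from collections import Counter

# Typical sshd failure line (format varies slightly between distros):
#   "Failed password for invalid user admin from 203.0.113.7 port 51123 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def count_failures(lines):
    """Count failed ssh logins per source IP from a log excerpt."""
    per_ip = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            per_ip[m.group(2)] += 1  # group(2) is the source IP
    return per_ip

# Sample log lines using documentation IP ranges
sample = [
    "Jan 10 10:00:01 host sshd[123]: Failed password for invalid user admin from 203.0.113.7 port 51123 ssh2",
    "Jan 10 10:00:02 host sshd[124]: Failed password for root from 198.51.100.9 port 40022 ssh2",
    "Jan 10 10:00:03 host sshd[125]: Failed password for invalid user test from 203.0.113.7 port 51999 ssh2",
]
print(count_failures(sample))  # Counter({'203.0.113.7': 2, '198.51.100.9': 1})
```

Point it at a real auth log on an internet-facing box and you won’t be waiting long for entries.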
Anyway, so we’ve got this service, sitting there, listening on port 22 (the standard port for ssh and sftp), that anyone can attempt to log on to…
Now, we accept this may get attacked; it’s the nature of such a service. But we use strong usernames and passwords which we set (rather than those connecting), so brute-force attempts are not getting anywhere.
Then one day, one of our database administrators gets in touch. “Looks like login attempts are going up on [this service].” He means our MFT (managed file transfer) service. Um…yes. Yes it does.
It should be noted that the DBA isn’t too bothered about someone trying to breach us at this stage; he’s worried he’s going to run out of disk storage. No-one else has told us anything. In fact, we checked the other teams. The security team say everything looks normal… And it does… (This is the number of sessions passing through one of our firewalls.) No real change there; in fact, it’s dropped right off…
Well, this is confusing. Still, something has changed. Let’s check with the network team and see if they’ve seen anything unusual. Again, it should be noted that our network team don’t spend their time constantly checking all 900 trillion incoming packets we get every day. As long as our internet pipe isn’t full (and we have a very fat internet pipe), they’re not going to raise any alerts.
Aha! The orange boxes show a high number of permanent sessions: many packets but no established connection. The attackers are opening up an ssh session and firing lots of login attempts down that tunnel.
The numbers on the left are source IP addresses, with session counts in brackets. Where are they coming from? CHINA! We’re being attacked from CHINA!
(I’m using capitals for effect. Unless you’ve been living in a cave, you’ll be aware that state and non-state sponsored attacks from China and Russia make up the overwhelming majority of hacking that takes place nowadays. Certainly they’re the biggest hitters on our firewalls by far.)
How many then? Peaking at over 500 THOUSAND attempts an hour, this is actually quite serious.
Little sub-story here. When non-operational or non-security people are involved in a security investigation, they are liable to make knee-jerk decisions. Not because they’re idiots or unprofessional, but because they sometimes lack the experience to understand what will and won’t work when addressing issues like this in the heat of the moment. They may also be the loudest in the room and need quieting, so off we go on an educational whack-a-mole process of blocking individual IP addresses…
You see, you’re on a hiding to nothing if you think the people carrying out these attacks are sat behind a single IP address, typing out login attempts at a rate of 500,000 per hour. This is a botnet (we ran some checks against some of the sources; they were most commonly Linux boxes or compromised DSL routers with an open ssh port). It’s a lot more than one device and it is definitely centrally controlled. The instant it realises it’s not getting through from one source, it tries from another, and another, and another. Whack-a-mole isn’t going to work. Geo-blocking might. Bye bye China.
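For the curious, the logic behind geo-blocking is just membership testing against a country’s published address ranges. A toy sketch of the idea (the CIDR blocks below are documentation ranges standing in for a real, regularly updated country-to-CIDR feed, which is what a firewall actually uses):

```python
import ipaddress

# Illustrative only: these are documentation ranges, NOT real Chinese
# allocations. A real geo-block uses a maintained country IP feed.
GEO_BLOCKED_NETS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_geo_blocked(ip):
    """Return True if the source address falls in a blocked country range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in GEO_BLOCKED_NETS)

print(is_geo_blocked("203.0.113.7"))  # True
print(is_geo_blocked("192.0.2.1"))    # False
```

The weakness, as we found out, is obvious: a global botnet simply has its non-blocked members carry on.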
Nope, that hasn’t worked either, although we’ve changed our configuration so we can see the source IPs further inside the network.
As an aside, we can now also see which accounts are being tried. There’s no root account on this service, but hey, fill your boots.
Where is that IP address based?
Italy? Are they attacking us now?
Of course, it’s not state-sponsored Italy, just part of that botnet. It’s global. This is big, very big. I mean, it’s coming from everywhere.
Our best option now is to let the service see the true source address, turn on DDoS (Distributed Denial of Service) protection and hope that sorts it, because I’m almost out of ideas now.
BOOM! That sorted it. The service now actively maintains a dynamic block list of source IPs. If it sees more than 5 failed logins from one source, it denies access from that IP for a set period.
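The product’s internals aren’t documented here, but the mechanism it implements is simple enough to sketch. This is an illustrative version only (the block duration and clock injection are my assumptions, not the vendor’s): more than 5 failures from one IP and that IP is denied until the block expires.

```python
import time
from collections import defaultdict

THRESHOLD = 5          # failures tolerated; blocking starts on the next one
BLOCK_SECONDS = 3600   # illustrative "set period" for the block

class DynamicBlocklist:
    """Track failed logins per IP and block IPs that exceed the threshold."""

    def __init__(self, now=time.time):
        self.now = now                 # injectable clock, handy for testing
        self.failures = defaultdict(int)
        self.blocked_until = {}

    def is_blocked(self, ip):
        until = self.blocked_until.get(ip)
        if until is None:
            return False
        if self.now() >= until:        # block expired: forgive and forget
            del self.blocked_until[ip]
            self.failures[ip] = 0
            return False
        return True

    def record_failure(self, ip):
        self.failures[ip] += 1
        if self.failures[ip] > THRESHOLD:  # more than 5 failed logins
            self.blocked_until[ip] = self.now() + BLOCK_SECONDS

bl = DynamicBlocklist()
for _ in range(6):                     # sixth failure trips the block
    bl.record_failure("203.0.113.7")
print(bl.is_blocked("203.0.113.7"))    # True
print(bl.is_blocked("198.51.100.9"))   # False
```

Because the list is dynamic and automatic, it scales with the botnet in a way that manual whack-a-mole never could.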
So what’s the moral of that story?
Well, at first everyone wanted to blame our security operations team for not telling us about this, but in fact they’re blameless; nothing looked unusual to them. Maybe if we’d fed them the MFT service logs they might have spotted this, but I know from experience we’d have tried to tune out noise before this, so we may have accidentally blocked their alerts.
The moral in this case is: listen to all your IT admin teams, even if they’re not telling you about security-related incidents. Because they may be telling you about security-related incidents without even realising it. What started as a complaint about disk space filling up from a database administrator who *never* gets involved in security ended as a multi-team security investigation, with a concerted effort from a number of SMEs to come to a conclusion.
As a secondary moral, if you have a dedicated security team, let them see as much as possible and let them react if it is a security incident. Chances are they’ll have seen it all before and will understand what works and what won’t. They will provide your most measured response and hopefully will ensure the situation is resolved in the quickest time frame possible.