
Posted

Man, it's one of those things, like cancer in a family member. You never see it coming.

 

Long story long: got a text this morning from my floor manager at the factory that he couldn't get into SAP. Told him to try another client. Same. Then I tried to log in from my phone to the admin account and the password was changed. Used my admin account that nobody else knows about and saw a skull-and-crossbones image on the server. All encrypted. God damn, here we go.

 

Threw on a hat and drove to the office. Done. They want money. Luckily my server is backed up off-site and we will be fine, but we likely can't bill until Thursday.

 

I don't want to prosecute this person, I want to use an aluminum bat with 3 free swings on this lowlife piece of shit.

 

For those computer dudes who understand this, it looks like a brute force attack on an open port. Got locked down.

 

If we get up and going tomorrow, it's only a 20k loss. I'm guessing Thursday at best. F hackers. I want to murder them.

  • Like 6
Posted (edited)
28 minutes ago, markstanco said:

For those computer dudes who understand this, it looks like a brute force attack on an open port. Got locked down.

That's a big ol OOF of an unforced error by your IT and security orgs. How in the flying fuck do you leave unnecessary ports open in this day and age, or have such poor network segmentation that a publicly accessible box has access to the rest of your network to allow an attacker to propagate?

Edited by Captainant
  • Like 4
Posted
That's a big ol OOF of an unforced error by your IT and security orgs. How in the flying fuck do you leave unnecessary ports open in this day and age, or have such poor network segmentation that a publicly accessible box has access to the rest of your network to allow an attacker to propagate?
Looking into this. These ports were the ones SAP instructed us to open on our SonicWall.
Posted

Ouch, yeah, attacks have really escalated over the last year and are up even more so far in 2020. Now is not the time to skimp on infrastructure protection. Some of the shit we're seeing is extremely sophisticated; these aren't your everyday brute-force bots anymore.

  • Like 1
Posted

Everyone is being scanned; you just got unlucky by having something open and not patched. It's how it is.

But most likely one of your staff downloaded something by clicking on shit in an email.

Save all your log files.

Posted
2 minutes ago, markstanco said:
6 minutes ago, Captainant said:
That's a big ol OOF of an unforced error by your IT and security orgs. How in the flying fuck do you leave unnecessary ports open in this day and age, or have such poor network segmentation that a publicly accessible box has access to the rest of your network to allow an attacker to propagate?

Looking into this. These ports were the ones SAP instructed us to open on our SonicWall.

Depending on the size of your SAP installation, I wouldn't be surprised if it was Swiss cheese. It's a dangerous intersection of business demands and the technical consequences of fast fixes. 

Trying to integrate an external system or data source and need it done yesterday? Just open some more ports to all inbound IPs, rather than a narrow range. Happens all. the. time. And since the business needs SAP to run, they can pressure IT to break best practices for the sake of being a team player. 

Sorry to hear about your pain, but glad to hear y'all at least have a DR plan in place so you're not totally hosed. 

  • Like 1
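Not speaking for how markstanco's SonicWall is actually configured, but to make the "open to all inbound IPs vs. a narrow range" point above concrete, here's a minimal Python sketch of auditing an exported rule list for any-source rules. The rule list, names, and ports are invented for illustration; a real firewall export would need its own parser.

```python
import ipaddress

# Hypothetical export of inbound firewall rules: (name, port, source CIDR).
# A real SonicWall/SAP rule export would need its own parser.
RULES = [
    ("sap-dispatcher", 3200, "0.0.0.0/0"),       # open to the whole internet
    ("sap-gateway",    3300, "203.0.113.0/24"),  # scoped to a partner range
    ("rdp-temp-fix",   3389, "0.0.0.0/0"),       # the "need it done yesterday" special
]

def is_any_source(cidr: str) -> bool:
    """True if the rule allows every inbound IPv4 address."""
    return ipaddress.ip_network(cidr) == ipaddress.ip_network("0.0.0.0/0")

def audit(rules):
    """Print rules that expose a port to all inbound IPs."""
    for name, port, source in rules:
        if is_any_source(source):
            print(f"WIDE OPEN: {name} (port {port}) allows {source} - scope it down")

if __name__ == "__main__":
    audit(RULES)
```

The point of the sketch is only that "allow any source" rules are easy to find mechanically once someone bothers to look; scoping them to a known address range is the cheap fix being described above.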
Posted
Depending on the size of your SAP installation, I wouldn't be surprised if it was Swiss cheese. It's a dangerous intersection of business demands and the technical consequences of fast fixes. 
Trying to integrate an external system or data source and need it done yesterday? Just open some more ports to all inbound IPs, rather than a narrow range. Happens all. the. time. And since the business needs SAP to run, they can pressure IT to break best practices for the sake of being a team player. 
Sorry to hear about your pain, but glad to hear y'all at least have a DR plan in place so you're not totally hosed. 
Pretty obviously you are an IT guy. I'm not. I really hope our backup of the DB is good. I think it is; 10p last night was the last image, and they hacked us at 2:51a today. We will see. Of course they cleared the local backup I change daily.
Posted

Threw a post on LI early and found out a huge customer (about 200mm in sales) got knocked down by this in June. If they want in, they will get in. Target and apparently Home Depot (or was it Lowes?) agree.

Posted
24 minutes ago, markstanco said:

Pretty obviously you are an IT guy. I'm not. I really hope our backup of the DB is good. I think it is; 10p last night was the last image, and they hacked us at 2:51a today. We will see. Of course they cleared the local backup I change daily.

I've never been hacked like that, so I can't say I know what that feels like, but I can tell you it's a really shitty feeling finding out that all the backups for a business-critical app are bad. We had a yearly security audit and realized that for 6 months all the DB backups were zero length. Not cool, man. Very not cool. Hopefully your DBAs have actually checked that the backups were running.

 

  • Like 1
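The zero-length-backup surprise above is the kind of thing a dumb daily check catches. A minimal sketch, assuming backups land as files in a folder; the path and thresholds are placeholders, not anything from the posts:

```python
import os
import sys
import time
from pathlib import Path

# Hypothetical backup drop directory and thresholds - adjust to your setup.
BACKUP_DIR = Path(r"D:\backups\sapdb")
MIN_SIZE_BYTES = 10 * 1024 * 1024   # anything under 10 MB is suspicious
MAX_AGE_HOURS = 26                  # a "daily" backup older than about a day is missing

def check_latest_backup(backup_dir: Path) -> list:
    """Return a list of problems with the newest backup file, empty if it looks OK."""
    problems = []
    files = sorted(backup_dir.glob("*"), key=os.path.getmtime, reverse=True)
    if not files:
        return ["no backup files found at all"]
    latest = files[0]
    size = latest.stat().st_size
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if size < MIN_SIZE_BYTES:
        problems.append(f"{latest.name} is only {size} bytes (zero/near-zero length)")
    if age_hours > MAX_AGE_HOURS:
        problems.append(f"{latest.name} is {age_hours:.0f} hours old")
    return problems

if __name__ == "__main__":
    issues = check_latest_backup(BACKUP_DIR)
    for issue in issues:
        print("BACKUP PROBLEM:", issue)
    sys.exit(1 if issues else 0)  # non-zero exit so a scheduler/monitor can alert on it
```

Run it from a scheduled task right after the backup window; a size-and-age check is crude, but it would have flagged six months of empty files on day one.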
Posted
6 hours ago, Updawg said:

Everyone is being scanned; you just got unlucky by having something open and not patched. It's how it is.

But most likely one of your staff downloaded something by clicking on shit in an email.

Save all your log files.

Hmm... NowThis' long game? 

Posted

Your privileged user access controls suck. You should have tools in place that deny access based on policy, and your SAP data should be backed up to a secure site and encrypted so even if they exfil your data... who cares. 
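
On the "backed up to a secure site and encrypted" point: a minimal sketch of encrypting a dump before it leaves the building, using the third-party cryptography package. The filenames are made up, and the key handling is deliberately simplistic; in practice the key belongs in a vault the attackers can't reach, and losing it means losing the backups too.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In real life the key lives in a secrets store/HSM, never next to the backups.
KEY_FILE = "offsite_backup.key"
SOURCE = "sapdb_2020-07-15.bak"   # hypothetical nightly dump
ENCRYPTED = SOURCE + ".enc"       # what actually gets shipped offsite

def load_or_create_key(path: str) -> bytes:
    """Load the symmetric key, generating one on first run."""
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as fh:
            fh.write(key)
        return key

def encrypt_backup(source: str, dest: str, key: bytes) -> None:
    """Encrypt the dump so an exfiltrated copy is useless to the thief."""
    f = Fernet(key)
    with open(source, "rb") as fh:
        ciphertext = f.encrypt(fh.read())  # fine for dumps that fit in memory
    with open(dest, "wb") as fh:
        fh.write(ciphertext)

if __name__ == "__main__":
    encrypt_backup(SOURCE, ENCRYPTED, load_or_create_key(KEY_FILE))
    print(f"wrote {ENCRYPTED}; ship this offsite, not {SOURCE}")
```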

Posted

I'm not going to expose the name, but a major investment firm handling retirement funds in the area got hit with a ransomware attack two or three years ago. Apparently someone foolishly opened a fishy email on their work computer out of curiosity. (Amazed that this still happens.) They shut shit down for two or three days while IT worked it. I'm not privy to how it was handled, but there was some serious puckering going on.

Posted

Client of ours got hit on Monday.  USD$1.5m ask.  They are in financial services and have a pretty sophisticated IT group (or so we think).   We train and train and train, but still have employees fail our internal tests. 

Posted
I've never been hacked like that, so I can't say I know what that feels like, but I can tell you it's a really shitty feeling finding out that all the backups for a business-critical app are bad. We had a yearly security audit and realized that for 6 months all the DB backups were zero length. Not cool, man. Very not cool. Hopefully your DBAs have actually checked that the backups were running.
 


This is why I ditched backup software many years ago in favor of duplicate copies of data stored offsite securely. We had an issue migrating to Office 365 where it blew away domain users' local profiles, so they only had emails from before the MX record move. No big deal to me, I think, as I can just restore their mailboxes from the backups that the software says have successfully run for several years now. Well, how do you know if your backups are actually good until you try to restore them? You don't. It turned out that the backups were corrupt and couldn't be restored.

I ended up having to go to each user's old local profile, copy their .ost file, convert it to .pst, and then import it into their new Office 365 inbox. It was a pain in the ass and a painful lesson to learn. I immediately ditched backup software and invested in inexpensive hardware to duplicate all of our data. Storage is so cheap right now that it makes no sense to rely on unreliable backup solutions.
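
A minimal sketch of the "duplicate copies you can actually verify" approach described above: copy a tree to a second location and compare SHA-256 hashes, so you know the duplicate is both present and readable. The source and mirror paths are placeholders.

```python
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/data/office")         # placeholder: the live data
MIRROR = Path("/mnt/offsite/office")  # placeholder: cheap duplicate storage

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror_and_verify(source: Path, mirror: Path) -> list:
    """Copy every file under source to mirror, then prove the copy matches."""
    mismatches = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = mirror / src.relative_to(source)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if sha256_of(src) != sha256_of(dst):
            mismatches.append(str(src))
    return mismatches

if __name__ == "__main__":
    bad = mirror_and_verify(SOURCE, MIRROR)
    print("verified OK" if not bad else f"MISMATCHED FILES: {bad}")
```

Hashing the destination forces a full read-back of the copy, which is exactly the step the failed backup software never did.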
Posted
3 hours ago, Texas_Rocks said:

I'm not going to expose the name, but a major investment firm handling retirement funds in the area got hit with a ransomware attack two or three years ago. Apparently someone foolishly opened a fishy email on their work computer out of curiosity. (Amazed that this still happens.) They shut shit down for two or three days while IT worked it. I'm not privy to how it was handled, but there was some serious puckering going on.

Sounds like our company. We got hit about 2 summers ago and were completely shut down for almost a week; they didn't even have us bother to come into work for 3 days, due to nothing working. And with as many stupid people as we have here, I wouldn't doubt that's how ours happened...

Posted
1 hour ago, DFWTexEx said:

Client of ours got hit on Monday.  USD$1.5m ask.  They are in financial services and have a pretty sophisticated IT group (or so we think).   We train and train and train, but still have employees fail our internal tests. 

We have more fail than pass.

Posted
1 hour ago, HRSchenker said:

How do ransomware attacks originate? Is it really as simple as a phishing email? 

Your SonicWall is all Swiss cheese and that's no gooda.

  • Like 2
Posted
1 hour ago, HRSchenker said:

How do ransomware attacks originate? Is it really as simple as a phishing email? 

Probably 99% of the attacks come in this way.

1 hour ago, Hate said:

 


This is why I ditched backup software many years ago in favor of duplicate copies of data stored offsite securely. We had an issue migrating to Office 365 where it blew away domain users' local profiles, so they only had emails from before the MX record move. No big deal to me, I think, as I can just restore their mailboxes from the backups that the software says have successfully run for several years now. Well, how do you know if your backups are actually good until you try to restore them? You don't. It turned out that the backups were corrupt and couldn't be restored.

I ended up having to go to each user's old local profile, copy their .ost file, convert it to .pst, and then import it into their new Office 365 inbox. It was a pain in the ass and a painful lesson to learn. I immediately ditched backup software and invested in inexpensive hardware to duplicate all of our data. Storage is so cheap right now that it makes no sense to rely on unreliable backup solutions.

 

That's one of the oldest jokes in IT.  Backups never fail.  Restores, on the other hand...

Posted

Yes, phishing is the main way these groups are getting in.  No matter how robust your security strategy is, humans are the weakest link.  All it takes is one person to click on the link or attachment and get a trojan like Emotet/TrickBot installed on their system, and the hackers are in the network.  From there, they move laterally and do credential gathering, with the end goal of obtaining domain admin creds (in the case of Windows networks).  If they are successful, you had best have your backups or duplicate copies locked safely away, because they will actively seek out and destroy your backups.

These guys can be inside the network for months going undetected.  They are very good at covering their tracks.
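
One way to keep a copy "locked safely away" from an attacker who ends up with domain admin creds is write-once storage. A hedged sketch using boto3 and S3 Object Lock; the bucket and file names are placeholders, the bucket has to be created with Object Lock enabled, and ideally the credentials used here are allowed to write but not delete.

```python
import base64
import hashlib
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3

BUCKET = "example-immutable-backups"      # placeholder; bucket created with Object Lock enabled
BACKUP_FILE = "sapdb_2020-07-15.bak.enc"  # placeholder nightly dump, already encrypted
RETAIN_DAYS = 35                          # nobody, admins included, can delete it before then

def upload_immutable(bucket: str, path: str, retain_days: int) -> None:
    """Upload one backup object with a compliance-mode retention date."""
    s3 = boto3.client("s3")
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    with open(path, "rb") as fh:
        body = fh.read()  # fine for a sketch; use multipart uploads for huge dumps
    s3.put_object(
        Bucket=bucket,
        Key=path,
        Body=body,
        # S3 requires an MD5 header on requests that set Object Lock parameters.
        ContentMD5=base64.b64encode(hashlib.md5(body).digest()).decode(),
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )

if __name__ == "__main__":
    upload_immutable(BUCKET, BACKUP_FILE, RETAIN_DAYS)
    print(f"{BACKUP_FILE} locked in {BUCKET} for {RETAIN_DAYS} days")
```

The design point is that deletion is blocked by the storage service itself, not by credentials the attacker can steal, so "they cleared the local backup" stops being game over.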

Posted

IT is always big about telling employees to stay away from phishing emails, but then you get an email from HR telling you to select your benefits on a 3rd party site that uses single sign-on but requires you to type in your password.

  • Like 1
Posted
1 hour ago, shakahorn said:

Probably 99% of the attacks come in this way.

That's one of the oldest jokes in IT.  Backups never fail.  Restores, on the other hand...

Since I'm not an IT guy, I need some help with backups. Currently, I use an internet-based backup service for our small office. My server has two hard drives that were not set up as RAID. I thought Windows had a program to auto-backup to the other drive, but I didn't see it on Windows Server 2019. What is the best way to have a secondary backup system in case the service fails? How do you check to see if the backup service actually works without waiting for a catastrophe? About the only way I can think of doing it is to take an extra computer with the patient management system installed but not on the network and restore onto there (anything on the network would automatically try to sync the data, so we could lose work). Taking an external drive back and forth is too much of a PITA.
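
On the "how do you check the backup actually works" part of the question above: the only real test is a restore drill. A minimal sketch that compares a test restore (done to a scratch folder on a machine that isn't syncing, as described) against the live data and reports anything missing or different. The paths are placeholders, and files that changed after the backup ran will legitimately show up as different.

```python
import filecmp
from pathlib import Path

LIVE = Path(r"C:\PracticeData")                     # placeholder: the live data set
RESTORED = Path(r"E:\restore_drill\PracticeData")   # placeholder: where the test restore landed

def restore_report(live: Path, restored: Path) -> dict:
    """Compare a test restore against live data; anything listed here needs a look."""
    report = {"missing": [], "different": []}
    for src in live.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(live)
        dst = restored / rel
        if not dst.exists():
            report["missing"].append(str(rel))
        elif not filecmp.cmp(src, dst, shallow=False):  # byte-for-byte comparison
            report["different"].append(str(rel))         # expected for files edited since the backup
    return report

if __name__ == "__main__":
    result = restore_report(LIVE, RESTORED)
    if not result["missing"] and not result["different"]:
        print("restore drill passed: every live file exists and matches")
    else:
        print("missing from restore:", result["missing"])
        print("differs from live:", result["different"])
```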

Posted (edited)

@Bevo RAID, in its various configurations, helps protect against data loss when one or more hard drives fail; it's an entirely separate thing from backups.  With 2 drives you could set up RAID 1, or mirrored drives.  One could die, you could replace it, and the RAID controller would rebuild the failed drive.

Why in the hell do any companies have on-premises servers anymore for ANYTHING?

Disclaimer: I sell bandwidth/networks for a living. I get it, not everyone has the budget for wire-speed fiber connectivity, but it's a hell of a lot simpler to secure data centers/cloud platforms against the kind of bandits that hit Stanco today.

I've seen probably a dozen SMBs get hit by ransomware assholes.

@markstanco did you call your insurance carrier yet?

 

Edited by BearSchlong
  • Like 1
Posted
Since I'm not an IT guy, I need some help with backups. Currently, I use an internet-based backup service for our small office. My server has two hard drives that were not set up as RAID. I thought Windows had a program to auto-backup to the other drive, but I didn't see it on Windows Server 2019. What is the best way to have a secondary backup system in case the service fails? How do you check to see if the backup service actually works without waiting for a catastrophe? About the only way I can think of doing it is to take an extra computer with the patient management system installed but not on the network and restore onto there (anything on the network would automatically try to sync the data, so we could lose work). Taking an external drive back and forth is too much of a PITA.


Depending on your budget, it may be wise to see what you can do to move some of your “servers” to Azure or AWS. I wouldn’t have any servers on site at this point anymore. The maintenance, cost, and security vulnerabilities are too much of a pain in the ass these days with the availability of space in the cloud.
  • Like 1
Posted
9 hours ago, HRSchenker said:

How do ransomware attacks originate? Is it really as simple as a phishing email? 

Check out this podcast, it's a profile of a big ransomware player and how the extortion business plan works. 

https://darknetdiaries.com/episode/44/

It's a good explanation that's a little nerdy, but doesn't require deep technical knowledge to understand. It's pretty incredible just how insecure most things are, whether due to negligence or ignorance.

Posted
12 minutes ago, Captainant said:

Check out this podcast, it's a profile of a big ransomware player and how the extortion business plan works. 

https://darknetdiaries.com/episode/44/

It's a good explanation that's a little nerdy, but doesn't require deep technical knowledge to understand. It's pretty incredible just how insecure most things are, whether due to negligence or ignorance.

I feel like there’s probably some phishing links on that site. Not clicking.

  • Like 4
Posted
@Bevo RAID, in its various configurations, helps protect against data loss when one or more hard drives fail; it's an entirely separate thing from backups.  With 2 drives you could set up RAID 1, or mirrored drives.  One could die, you could replace it, and the RAID controller would rebuild the failed drive.

Why in the hell do any companies have on-premises servers anymore for ANYTHING?

Disclaimer: I sell bandwidth/networks for a living. I get it, not everyone has the budget for wire-speed fiber connectivity, but it's a hell of a lot simpler to secure data centers/cloud platforms against the kind of bandits that hit Stanco today.

I've seen probably a dozen SMBs get hit by ransomware assholes.

@markstanco did you call your insurance carrier yet?

 

On your last statement, yes. We won't hit the minimum downtime on our cyber policy.

 

On edit: unless we are still down on Tuesday. Should be back up Friday, Saturday at the latest. (Knock an aluminum bat on the wooden knees of whoever did this.)

Posted
Since I'm not an IT guy, I need some help with backups. Currently, I use an internet-based backup service for our small office. My server has two hard drives that were not set up as RAID. I thought Windows had a program to auto-backup to the other drive, but I didn't see it on Windows Server 2019. What is the best way to have a secondary backup system in case the service fails? How do you check to see if the backup service actually works without waiting for a catastrophe? About the only way I can think of doing it is to take an extra computer with the patient management system installed but not on the network and restore onto there (anything on the network would automatically try to sync the data, so we could lose work). Taking an external drive back and forth is too much of a PITA.
I'm a small business as well. If you have questions, feel free to PM me your number and I can tell you all the info I wish I knew 96 hours ago. I'm not an IT guy either, but I have a wealth of knowledge about IT, per my IT guy. I can fix shit 9 out of 10 times without calling him, but this whole thing blew me away.
