The correct answer: assess the issue, determine the scope of impact, and remediate the initial problem.

Since I have software which scans file diffs, I can see the vulnerabilities were injected in late October/early November.
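
If you don’t have dedicated scanning software, you can approximate the same idea with a scheduled hash-and-diff of the web root. A minimal sketch, assuming a standard web root path and somewhere outside the container to keep the baseline:

    # Build a baseline of file hashes once; store it where the web server cannot write.
    find /var/www/html -type f -print0 | xargs -0 sha256sum | sort -k 2 > /backups/site.sha256

    # On a schedule, re-hash and diff; any output means files were added, removed, or changed.
    find /var/www/html -type f -print0 | xargs -0 sha256sum | sort -k 2 \
        | diff /backups/site.sha256 -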

So I restored a backup from a few weeks prior to that date.

After restoring from the backup, I immediately updated all of the plugins/software and removed the package which introduced the vulnerability.
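
On a WordPress box specifically, that cleanup step is scriptable. A rough sketch assuming wp-cli is installed (the plugin slug below is a placeholder, not the actual offender):

    # Bring core, plugins, and themes up to date
    wp core update
    wp plugin update --all
    wp theme update --all

    # Delete the offending package outright rather than just deactivating it
    wp plugin delete vulnerable-plugin-slug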

Now, at this point, you might be concerned about the security of my homelab.

I am not.

Because I treat my external-facing services as honeypots which I expect to get PWNED. As such, if the attacker managed to obtain shell access to the target kubernetes container, the impact was limited, because the pod itself has ZERO network access to anything except the internet. It can’t even talk to my internal DNS server. Nothing.
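
For anyone wanting to replicate that kind of lockdown, here is a minimal sketch of an egress-only NetworkPolicy. The namespace, labels, and CIDRs are placeholders, and it only works if your CNI actually enforces NetworkPolicy:

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: internet-only-egress
      namespace: dmz
    spec:
      podSelector:
        matchLabels:
          app: wordpress
      policyTypes:
        - Egress
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
                except:          # blocks all private ranges, including internal DNS
                  - 10.0.0.0/8
                  - 172.16.0.0/12
                  - 192.168.0.0/16
    EOF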

As well, any authentication attempts against my internal servers would have been detected by my log monitoring platform, which would have emailed me an alert.
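
The alerting side does not have to be a full platform to be useful. As a hedged illustration (the mail setup, service name, and address are all assumptions), even a cron’d script over the auth log catches the obvious stuff:

    # Hypothetical cron job: mail yourself any failed SSH logins from the past hour.
    # Assumes a working local MTA and the mail(1) utility; the unit name varies by distro.
    FAILED=$(journalctl -u ssh --since "1 hour ago" | grep "Failed password")
    [ -n "$FAILED" ] && echo "$FAILED" | mail -s "Auth attempts on $(hostname)" admin@example.com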

Since this is a docker/kubernetes container, I can rest easy knowing there are no persistent filesystem modifications to the container itself, as its filesystem is ephemeral. And since I restored to a backup from before the file changes were detected, that is even more peace of mind.
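
You can make that property explicit instead of incidental. As a sketch (the image name and mount paths are placeholders), running the container with a read-only root filesystem stops an attacker from persisting changes even while the container is live:

    # Root filesystem is read-only; only the paths the app genuinely needs are writable.
    docker run --read-only \
        --tmpfs /tmp \
        -v app-data:/var/www/html/wp-content \
        my-wordpress-image:latest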

So, what did I find?

A lot of PHP files containing very suspicious exec calls which should not be present, plus lots of lovely obfuscated code that, just as suspiciously, wrapped everything in eval.
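
If you want to hunt for the same thing in your own web root (the path here is an assumption), a crude grep for the usual PHP malware primitives is a decent first pass; the payloads are obfuscated, but the entry points still have to call something:

    # Quick-and-dirty sweep for suspicious PHP primitives
    grep -rnE 'eval\(|base64_decode\(|shell_exec\(|exec\(|system\(|assert\(' \
        /var/www/html --include='*.php'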

Why did I make a post on this?

Because a few times a week, I see a post along the lines of…

“HELP MY LAB GOT PWNED AND MY STUFF IS NOW ENCRYPTED. WHAT SHOULD I DO?!?!?!”

I am making this post because if you follow the recommended practice of keeping proper backups (the 3-2-1 rule), you can recover from these issues without breaking a sweat.
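
For anyone unfamiliar, 3-2-1 means three copies of your data, on two different types of media, with one copy offsite. A minimal sketch using restic (my choice of tool and the repository locations are assumptions, and the offsite copy needs credentials in the environment):

    # Copy 1 is the live data. Copy 2 goes to local NAS storage; copy 3 goes offsite.
    restic -r /mnt/nas/restic-repo backup /var/www/html
    restic -r s3:s3.amazonaws.com/my-offsite-bucket backup /var/www/html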

Backups, combined with log/authentication monitoring, give you peace of mind. Properly securing everything and restricting network access where possible keeps an intrusion from spreading around your network.

Without the proper ACLs/rules in place, the attacker could have gained access to my network, in which case containing the damage would have been extremely difficult. This is why having a proper DMZ is still crucial for any publicly exposed services.
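
The core of that ACL is a single idea: the DMZ may reach the internet, but may never originate traffic into the LAN. A sketch for a Linux router, with hypothetical interface names and subnets:

    # Replies to LAN-initiated connections are fine; anything else DMZ -> LAN is dropped.
    iptables -A FORWARD -i dmz0 -o wan0 -j ACCEPT
    iptables -A FORWARD -i dmz0 -o lan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i dmz0 -o lan0 -j DROP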

Log monitoring software was able to alert me to the presence of an issue. Without it, there would still be who-knows-what running in my old WordPress site, and I would be none the wiser. Granted, it took a few weeks for an alarm to trip, which I have already remediated for the future.

Also, WordPress is a vulnerability magnet. This is the third time in the last 8 years.

  • __ToneBone__@alien.top

    Wouldn’t it also be better to just host the site on something like AWS? The downside is you have to pay the hosting fee, but if I were running a site, I’d rather have it outside of my network. I’m still very inexperienced, but that’s my thinking.

    • HTTP_404_NotFound@alien.top (OP)

      Depending on the use case, absolutely.

      For a small site, absolutely.

      I have a few dozen externally exposed projects that I self-host, though, and a few of them are rather resource-intensive, which would add up pretty quickly in AWS.

      In my case, keeping everything in an isolated DMZ vastly reduces the risk, as well as completely isolating internet-exposed applications from everything else.