Taking over from an IT company's failure
Taking on a new client is a fairly normal occurrence most of the time. It usually goes decently smoothly: getting domain and hardware passwords transferred over, sharing knowledge collected over time, making notes of any gotchas or unique issues with the client. Every once in a while, though, taking over a client leads to a complete horror of horrors as you discover how many things were done wrong and what a dangerous position the previous company left their now former client in.
I’ve been doing this for a decade now and I thought I’d seen it all, but a recent case proved to me never to underestimate someone’s ability to royally hose things up.
The original reason we were called in was a complaint about their server freezing up. They had called their IT people two weeks earlier and kept getting put off. Tired of the server freezing, they called us in. What did we find on arrival? A failing hard drive. Something that could have taken down their entire business, and the former IT company had put it off for who knows what reason?! The good news was that the disk was in a RAID array, so they had some redundancy, but the failing disk was still causing the server to hang quite frequently. So we replaced it and rebuilt the array.
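(For the curious: going forward we keep an eye on disk health automatically. Here’s a minimal sketch of the kind of check you could schedule on a Windows box, using the built-in wmic tool. A hardware RAID controller’s own CLI gives far better detail; the script below is illustrative rather than what we actually deploy.)

```python
# Minimal disk status check on a Windows box (illustrative sketch only).
# A hardware RAID controller's own CLI reports rebuild/battery state properly;
# this just catches drives that report a non-OK status.
import subprocess

def failing_disks():
    output = subprocess.check_output(
        ["wmic", "diskdrive", "get", "Model,Status"], text=True
    )
    problems = []
    for line in output.splitlines()[1:]:
        line = line.strip()
        if not line:
            continue
        *model, status = line.split()   # status is the last column
        if status.upper() != "OK":
            problems.append((" ".join(model), status))
    return problems

if __name__ == "__main__":
    bad = failing_disks()
    if bad:
        for model, status in bad:
            print(f"WARNING: {model} reports status {status}")
    else:
        print("All disks report OK")
```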
The next issue surfaced during array maintenance: a completely dead battery on the RAID controller. So we replaced the battery.
Next up, the server wasn’t even on a UPS. It was plugged into the “surge” side (not the battery side) of a UPS, and the UPS wasn’t big enough to handle the server anyway. So we got them an appropriately sized UPS.
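(Sizing the replacement is simple arithmetic. The numbers below are made-up illustration, not this client’s actual load, but the method is the same.)

```python
# Back-of-the-napkin UPS sizing (illustrative numbers only).
server_watts = 450        # estimated draw of server plus network gear
power_factor = 0.9        # typical rating for a line-interactive UPS
headroom = 1.25           # ~25% extra for growth and battery aging

required_va = server_watts / power_factor * headroom
print(f"Minimum UPS rating: ~{required_va:.0f} VA")  # ~625 VA, so buy the next size up
```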
So, what if the array had died? What if they had lost power and ended up with corruption from a dead array battery and an absent UPS? Well, they could have restored from backups, right? HAHAHA! No, no they couldn’t have. The “cloud” backup they were being charged for by their previous company wasn’t backing up any shared files. All of the business’s proprietary data would have been GONE. Their cloud backup was only configured to back up the “Program Files” directory, which would have been goddamn useless in a disaster recovery situation.
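(If you ever inherit someone else’s “cloud backup”, spend five minutes comparing what it actually selects against what the business actually needs. A trivial sketch of that sanity check; the paths are hypothetical, and every backup product has its own way of exporting the selection list.)

```python
# Sanity-check a backup selection against the folders the business actually needs.
# Both lists are hypothetical; the "backed_up" list would come from the backup
# product's own configuration export.
from pathlib import PureWindowsPath

must_protect = [
    r"D:\Shares\CompanyData",
    r"D:\Shares\Users",
    r"C:\QuickBooksData",
]
backed_up = [
    r"C:\Program Files",   # ...which is all we actually found selected
]

def is_covered(path, selections):
    p = PureWindowsPath(path)
    return any(
        p == PureWindowsPath(s) or PureWindowsPath(s) in p.parents
        for s in selections
    )

for path in must_protect:
    print(f"{path}: {'covered' if is_covered(path, backed_up) else 'NOT BACKED UP'}")
```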
While we’re on the subject of billing for services not provided, we also found that they were being charged for website hosting. The problem? Their IT company wasn’t hosting their website. The site was hosted at another provider in town. The ONLY thing their IT company was hosting was public DNS for the site, yet they were billing them at full website-hosting prices. Nice little scam they had going there, don’t you think?
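(Verifying who really hosts what takes a couple of minutes. A hedged sketch using the dnspython library, with example.com standing in for the client’s real domain.)

```python
# Where do DNS and the website actually live? (sketch; requires dnspython)
# "example.com" is a placeholder for the client's real domain.
import dns.resolver  # pip install dnspython

domain = "example.com"

print("DNS is served by:")
for ns in dns.resolver.resolve(domain, "NS"):
    print(f"  {ns.target}")

print("Website resolves to:")
for a in dns.resolver.resolve(domain, "A"):
    print(f"  {a.address}")

# Compare the nameservers (who runs DNS) and the A-record IPs (who runs the
# web server) against whatever is on the monthly invoice.
```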
I wish I could tell you the horrors stopped here, but they don’t.
After we took over their admin account and logged in, we discovered several EXE files used for cracking software. There was a Windows Loader crack package on there. Was it used? I’d say there’s a good chance it was. The physical server has a Microsoft serial sticker on it, but it’s for 2003. The server is running 2008, and we can’t find any license documentation proving it’s a legit copy… so that’s fantastic. We also discovered some QuickBooks keygens and cracks… so they likely don’t have valid QuickBooks licenses either. Awesome! We’ll get them onto properly licensed software as soon as we can, but that will be a slower process for multiple reasons.
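(Step one in untangling the licensing mess is just documenting what the server itself reports. Windows ships slmgr.vbs for that; here’s the trivial way to capture its output for the client file, sketched in Python.)

```python
# Record Windows' own view of its edition and activation state for the client file.
# slmgr.vbs ships with Windows; this just captures its /dli output.
import subprocess

result = subprocess.run(
    ["cscript", "//Nologo", r"C:\Windows\System32\slmgr.vbs", "/dli"],
    capture_output=True, text=True,
)
print(result.stdout)  # edition, partial product key, license status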
I’ve saved the best for last because it’s quite a doozy. Every employee had been added to the Power Users group. Power Users was a security group in SBS 2003 (best we can figure, they were migrated to 2008 from SBS 2003). The Power Users group literally gives you full domain access; the only thing you can’t do is physically log on to the server. You get full RDP access, full access to Active Directory, everything. This led to some very bad things happening on their server which I won’t go into, but damn. Just what the hell? Why on Earth would you ever add every single employee to the Power Users group? The mind boggles in the face of such pure incompetence.
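(Auditing and emptying that group was job one, and you want a record of who was in it before you start pulling people out. A quick sketch using Windows’ built-in net command; if the group lives in the domain rather than locally on the server, swap in net group … /domain.)

```python
# Snapshot the membership of the Power Users group before stripping it down.
# Uses the built-in "net" command; for a domain group use:
#   ["net", "group", "Power Users", "/domain"]
import subprocess

output = subprocess.check_output(["net", "localgroup", "Power Users"], text=True)
print(output)  # paste this into the change log before removing anyone
```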
Fortunately, they’ve transferred all services away from their previous company and are now with us. We will take good care of them and have fixed all of the awful things we can: healthy array, new UPS, online backups collecting the correct files, canceled the bogus services, repaired the damage caused by domain power users, and set up proper account permissions. It’s been an adventure so far, that’s for sure.
I wonder if I can get a list of this company’s other clients… I almost feel morally obligated to fix anyone else’s systems they have ever touched.