Considering the cron time format is accepted in more than just runners like cron/anacron, I’d say your answer is reasonable (plus this is comics). No pitchforks here :)
The average damage on that is 45… Konsi OP
… What?
I wondered if someone would post that second one.
For the first, I think Square Enix got it right - headphones like the right image, but with the bridge between the ear cups flopped back on their head.
Alternatively, you could have headphones like the first but with the drivers in the upper cat ear portion by their actual ears.
OT but am I the only one that noticed the fox’s headphones aren’t on their ears?
I have five Dell servers in the rack, and another two Dells and three X9(?) Supermicros (Atom C2758 8-core, if memory serves) on the shelf.
I think only one or two of the Dells came with iDRAC Enterprise and all the Supermicros had full licensing. It’s absolutely beautiful (once you get done fighting the software updates to purge the Java gremlins).
My three R730s were upgraded to Enterprise as soon as I had budget and a spare line item to do so. Power on/off is great and console+ISO is peak. I love this.
If you’re looking at Intel, you might be thinking of IME/vPro.
IPMI (such as iDRAC on Dell) typically runs off-processor on a separate section of the motherboard, and it comes on AMD servers as well.
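To make that concrete, here’s a minimal sketch of poking a BMC out-of-band with ipmitool (the address and credentials are made up, and I’m assuming ipmitool is on your PATH):

```python
import subprocess

# Hypothetical BMC address/credentials. The BMC answers on its own NIC and
# controller, so this works even when the host OS (AMD or Intel) is down.
BMC_ARGS = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "secret"]

# Ask the BMC for the chassis power state over the network.
result = subprocess.run(
    ["ipmitool", *BMC_ARGS, "chassis", "power", "status"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "Chassis Power is on"
```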
What’s the difference between horizontal and vertical integration? (I know a few business words but usually not enough to be intelligent, this is a genuine question of confusion)
What’s this label on? I can tell what you’re supposed to avoid; I’m just curious why the equipment does that.
Or (insert MMO of choice)
Well… All three of them
It’s on AP News too - it’s real.
Hardware RAID just works, and for many, that’s good enough. In more advanced systems, all it’s got to handle is a boot partition, and if you’re doing your job as a sysadmin, there’s zero important data in there that can’t be easily rebuilt or restored.
I never said I didn’t use software RAID; I just wanted to add information about hardware RAID controllers. Maybe I’m blind, but I’ve never seen a good implementation of software RAID for the EFI partition or boot sector. During boot, most systems I’ve seen will always try to access one partition directly, then the second in order, which bypasses the concept of a RAID entirely, so the two would need to be kept in sync manually during updates.
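For illustration, the usual workaround I’ve seen amounts to mirroring the primary ESP to the second one after every bootloader update - a rough sketch, with hypothetical mount points:

```python
import subprocess

# Hypothetical mount points: the firmware reads the primary ESP directly
# (bypassing any RAID), so the "mirror" is really just a copy after updates.
PRIMARY_ESP = "/boot/efi"
BACKUP_ESP = "/boot/efi2"

# Mirror the primary ESP onto the backup, deleting stale files.
subprocess.run(
    ["rsync", "-a", "--delete", f"{PRIMARY_ESP}/", f"{BACKUP_ESP}/"],
    check=True,
)
```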
Because of that, there’s one notable place where I won’t use it - I always use hardware RAID for at minimum the boot disk, because Dell firmware natively understands everything about it from a detect/boot/replace perspective (or doesn’t see it at all, which is also fine). All four of my primary servers boot from either a StarTech RAID card similar to a Dell BOSS or an array directly on the PERC. It’s only enough space to store the core OS.
Other than that, at home all my other physical devices are hypervisors (VMware ESXi for now, until I can plot a migration), dedicated appliance devices (Synology DSM uses mdadm), or don’t have redundant disks (my firewall, backed up to git, and my NUC Proxmox box - both firewalls and the PVE box are all running ZFS for the features).
Three of my four ESXi servers run vSAN, which is like Ceph and replaces RAID. Like Ceph and ZFS, it requires an HBA or passthrough disks for full performance. The last one is my standalone server. Notably, ESXi doesn’t natively support any software RAID other than vSAN, so both of the standalone server’s arrays are hardware RAID.
When it comes time to replace that Synology, it’s going to be on TrueNAS.
For recovering hardware RAID: your most guaranteed success is going to be a compatible controller with a similar-enough firmware version. You might be able to find software that can stitch images back together, but that’s a long shot and requires a ton of disk space (which you might not have if it’s your biggest server).
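If you do try the imaging route, step one is a raw copy of every member disk before touching anything - which is exactly where the disk space goes. A rough sketch, with hypothetical device names and scratch path:

```python
import subprocess

# Hypothetical member disks and scratch location. One full-size image per
# member is why this approach eats multiple times the array's raw capacity.
MEMBERS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

for n, disk in enumerate(MEMBERS):
    # ddrescue copies what it can and tracks bad sectors in a map file.
    subprocess.run(
        ["ddrescue", disk, f"/mnt/scratch/member{n}.img", f"/mnt/scratch/member{n}.map"],
        check=True,
    )
```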
I’ve used dozens of LSI-based RAID controllers in Dell servers (both PERC- and LSI-branded) for work and homelab alike. They usually recover the old array to the new controller pretty well, and they generally have a much lower failure rate than the drives themselves (I find myself replacing the cache battery more often than the controller).
It’s only gone wrong twice, out of the handful of times I’ve moved an array to a RAID controller from a different generation.
As others have pointed out, this is where backups come into play. If you have to replace the server with one from a different generation, you run the risk that the drives won’t import. At that point, you’d have to sanitize the array’s superblock, re-initialize it as a new array, and restore from backup. The array might also be just fine and you never notice a difference (like my users who had to replace a failed R815 with an 820), but the results really land at the extremes - it either works or it fails, with no in-between.
Standalone RAID controllers are usually pretty resilient and fail less often than disks, but they are very much NOT infallible, as you are correct to assess. The advantage of software systems like mdadm, ZFS, and Ceph is that they remove the precise hardware compatibility requirements, but they by no means remove the software compatibility requirements - you’ll still have to do your research and make sure the new version is compatible with the old format, or keep the versions the same.
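As a quick illustration of that portability with mdadm (device name is hypothetical, and this assumes the new system’s mdadm understands the old metadata format):

```python
import subprocess

# md metadata lives on the member disks themselves, so on replacement
# hardware you can scan the superblocks and reassemble - no matching
# controller required.
subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

# Sanity-check the imported array (hypothetical device name).
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
```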
All that said, I don’t trust embedded motherboard RAID to the same degree that I trust standalone controllers. About 8-10 years ago, a friend of mine ran a RAID-0 on a laptop that got its superblock borked when we tried to firmware-update the SSDs - the system stopped detecting the array at all. We did manage to recover the data, but it took multiple times the raw amount of storage to do so.
Was not expecting a direct ref to Volo’s Guide. Love it.
Just because SponsorBlock exists doesn’t mean video creators shouldn’t be better.
Just like UBO and web ads.
For Certbot, I think it’s even further up the chain - OpenSSL. And if you’re installing it to Apache or Nginx, it’s probably just OpenSSL again.
Hasn’t Venmo been owned by PayPal for the past 10 years?