-March: Someone called me “back into their life”. I tried to comply with that request… started looking for E.
-Then life turned to shit.
-Got poisoned by crazy beer and crazy cigarettes.
-My computers got hacked… tried to secure some data by driving to a hospital, as my phones were compromised.
-Had break-ins at home. Stuff stolen. Ended up homeless…
-Drove to the hospital on the 29th of April, tried to give them corporate data backup tapes and asked them to call the police. They locked me up instead of securing the data and decryption key… WTF…
-Met a doppelganger of myself in the hospital… WTF… He looked like me, but not entirely, and had the same social security number (CPR). Hospital staff thought he was me and I was him, so they forced me to take his medications and locked me up thinking I was him…
-Was accused of all sorts of shit that I never did but the other guy apparently did…
-Thought i was going to die.
-Then they let me out after my boss came to save me, sorting out my identity.
-My car was stolen….
-Searched for it for a month.
-My car was recovered…
-Phones hacked; lost Gmail, Facebook and LinkedIn on/off/temporarily.
-Almost a 4-month fight to get the above back…
-Was told E was missing, searched for months, got back into hospital… weird stuff happened… can't find her…
-Computers hacked again… Facebook lost AGAIN…
Pretty much sums up the last 6 months.
On the positive side, I haven’t smoked cigarettes for 4 days but I could really use one right now!
For reasons I have yet to fathom, Debian 10 has changed the way the $PATH variable works. After a clean install it is no longer possible to run admin commands as root without typing /sbin/ in front of the command.
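A quick workaround, assuming the stock Debian 10 sbin locations, is to prepend those directories to root's PATH yourself:

```shell
# Prepend the sbin directories for the current session so admin
# commands like ip, zpool and update-grub resolve without a full path.
export PATH=/usr/local/sbin:/usr/sbin:/sbin:$PATH
echo "$PATH"
```

To make it stick, add the same export line to /root/.bashrc. The usual explanation is that plain `su` changed behavior in buster and no longer resets PATH; using `su -` instead gives root its full login environment and avoids the problem entirely.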
Notes for Debian 10: it is not required to remove packages prior to installing 2.0 unless a previous version of ZFS was installed on the machine. Read the last command of this guide before continuing.
Note2: If you are using your Proxmox host for GPU passthrough, it is advised to set options zfs l2arc_mfuonly=1 so that backups do not fill the cache with all sorts of crap. Otherwise they will most likely trash your SSD within a few weeks, unless it is enterprise-grade SLC.
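To make that setting survive reboots (assuming ZFS is loaded as a kernel module, which is the Proxmox default), a drop-in under /etc/modprobe.d works; the filename here is my own choice:

```
# /etc/modprobe.d/zfs.conf (any *.conf name in this directory works)
options zfs l2arc_mfuonly=1
```

The same tunable can also be flipped at runtime by writing 1 to /sys/module/zfs/parameters/l2arc_mfuonly as root.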
Note3: This does NOT work for a Proxmox or Debian install with a root ZFS pool!
WARNING: NOT SAFE FOR PRODUCTION
Here are the commands to enter, in order, for persistent L2ARC to work with Proxmox (Debian-based).
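The exact commands depend on your pool layout; a minimal sketch, assuming a pool named rpool and a cache SSD at /dev/sdX (both placeholders, substitute your own), with ZFS >= 2.0 for persistent L2ARC support:

```shell
# Attach the SSD as an L2ARC cache device (placeholder names, adjust)
zpool add rpool cache /dev/sdX
# Confirm the cache device shows up under the pool
zpool iostat -v rpool
# Persistent L2ARC rebuild is enabled by default in ZFS 2.0; verify:
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled   # 1 = rebuild on import
```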
-Ryzen 7 3700X
-32-64GB of RAM
-2x 4TB HDD's
-1-2x 512GB NVMe's
-1-2x SATA SSD's
-Some GPU
Most users put their Windows 10 on the NVMe's, either in RAID1 or using them as two individual devices. The SATA drives are used to store stuff, maybe the majority of the Steam library and normal documents. They might be RAID1.
The memory above 16GB goes mostly unutilized by the regular consumer, unless they need it for something like video editing.
Here are a few fun facts about SSD's:
-They are not meant for long-term storage and lose data over time. Essentially, the cells are written to once and given a certain electrical charge. This charge is never refreshed unless the cell is overwritten, and it diminishes over time. This issue is called “de-trapping” and also applies to USB sticks. HDD's do not have this problem.
-SSD's fail at the same time if run in RAID1. Both of them will reach their max TBW at the same time and fail simultaneously, probably within a single week of each other. HDD's do not have this problem either.
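The RAID1 wear point can be put into rough numbers (the TBW rating and daily write volume below are made up for illustration):

```shell
# A 600 TBW-rated SSD written at 100 GB/day lasts roughly 6000 days.
# In RAID1 both mirror members receive identical writes, so they approach
# that rated limit together rather than at independent times.
tbw_gb=$((600 * 1000))   # endurance rating converted to GB
per_day_gb=100           # assumed sustained writes per day
echo "$((tbw_gb / per_day_gb)) days to rated endurance"
```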
So, what can we do with this information? Well, we have memory, we have SSD’s and we have HDD’s.
What would AngryAdmin do?
-Install Proxmox on two USB drives and mirror them with ZFS.
-Configure a mirrored ZFS pool of the two 4TB HDD's.
-Attach whatever SSD's are available to this pool of HDD's as persistent L2ARC cache.
-Install Windows 10 as a virtual machine in Proxmox, pass through a few USB controllers and the GPU.
-Give the VM 24GB of RAM if total memory is 64GB, 16GB if total memory is 32GB. ZFS will use the rest as read cache (ARC) and write cache (ZIL).
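On the ZFS side, those steps could look roughly like this (the pool name "tank" and all device paths are placeholders, not from the post; the VM and passthrough setup happens in the Proxmox GUI or with qm):

```shell
# Mirror the two 4TB HDDs into one pool
zpool create tank mirror /dev/disk/by-id/HDD1 /dev/disk/by-id/HDD2
# Attach both NVMe's as (striped) L2ARC cache devices
zpool add tank cache /dev/disk/by-id/NVME1 /dev/disk/by-id/NVME2
# Optionally cap ARC so the Windows VM keeps its RAM, e.g. 8 GiB:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs-arc.conf
```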
Windows, Steam and important data are now stored on a pool consisting of one mirrored vdev, cached by whatever SSD's were present before. These cache devices are striped in what could resemble RAID0, yielding twice the read performance.
Imagine having two NVMe's backing these 4TB disks… The data you use most often will be cached on the NVMe's and read at twice the speed of a single set of RAID1 NVMe's. After Windows and the important stuff is cached, set options zfs l2arc_mfuonly=1.
Moreover, if the NVMe's break, which they eventually will, the data is not lost; it is still stored on the two HDD's.
You now fully utilize the space of your NVMe’s. You speed up the entire system by having the ARC present data to your VM’s when needed. Of course ARC is cleared when you reboot, but persistent L2ARC is not.
You are now utilizing both your memory and your NVMe's and get 98-99% of bare-metal performance.
If you feel like it, add another GPU, install a 2nd Windows, attach a 2nd keyboard and mouse and, last but not least, another monitor to the 2nd GPU. Have a friend over and play games on the same PC, 2 people at once.
Regular users waste 16GB of RAM if they have 32GB total. They waste SSD capacity by mirroring two drives. This solves that 🙂
My storage system currently looks like this: 6x 2TB disks, mirrored in pairs to form 3 mirrors. These 3 mirrors are striped (“RAID10”, but not really RAID10). 2x 240GB SSD's are attached to this pool of 6TB data capacity as 480GB of speedy L2ARC cache.
A second pool consists of a 12-year-old 1TB disk and a relatively new 4TB disk; the first will be replaced soon, and the pool will auto-expand to 4TB when I replace it. This pool holds temporary data that is not important, but I also do not want the IOPS this data on “storage2” requires interfering with the IOPS on “storage”.
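The auto-expand behavior mentioned above is a pool property; a sketch with placeholder device names (the pool name "storage2" is from the post):

```shell
# Enable growth once every device in the vdev is at least the new size
zpool set autoexpand=on storage2
# Swap the old 1TB disk for the new one; the pool resilvers, then grows
zpool replace storage2 /dev/disk/by-id/OLD-1TB /dev/disk/by-id/NEW-4TB
zpool list storage2   # SIZE reflects the larger disks after the resilver
```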