gsmitheidw blog
A place to put some stuff that doesn't suit Twitter or other media/code-sharing repositories.
Sunday, November 28, 2021
Google Nest (Mini) Cannot delete scheduled tasks
Thursday, April 29, 2021
Dell R510 (also R610, R710, etc.) with H700 PERC card and Proxmox with ZFS
The Dell PERC H700 card cannot do pass-through (a.k.a. non-RAID, raw disk, or JBOD) mode, which is ideally what ZFS and similar technologies, including VMware vSAN, need.
ZFS wants complete and direct access to your disks without going through any middleman RAID controller. However, you can create a RAID0 volume for each individual disk in the RAID config ([CTRL]+[R] when booting) and present that. Proxmox will warn that this is a bad idea, and rightly so, but it will work: it will detect each of those drives and allow you to install the base system as ZFS on them.
You may prefer to segregate your boot from your data, i.e. have a RAID1 mirror (possibly with ext4) for Proxmox itself and then present the rest as RAID0 drives formatted with ZFS.
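For the data pool, a minimal sketch from the Proxmox shell might look like this (the device names below are placeholders, not from this setup; check yours under /dev/disk/by-id/ first):

# Single-disk RAID0 volumes appear as ordinary block devices; find their stable names first
ls -l /dev/disk/by-id/
# Hypothetical example: a striped-mirror data pool over four RAID0-presented drives
zpool create -o ashift=12 tank mirror /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 mirror /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4
zpool status tank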
This trick of presenting drives as individual RAID0 volumes isn't supported by ZFS or Proxmox, but oddly it is supported by VMware vSAN. Another interesting difference is that vSAN doesn't support mixed mode on the controller: with a Dell PERC H730 in an R730, for example, you can put your OS (ESXi) on a RAID1 mirror and leave the remaining drives as raw pass-through. vSAN will work, but it isn't recommended or supported.
For Proxmox storage with ZFS, and to some extent also with VMware vSAN, the best option is to avoid hardware RAID for data drives and get an HBA card to attach them. If you don't have an HBA card, you may be able to re-flash a RAID card into HBA/IT mode, but unfortunately this is not possible with the Dell PERC H700.
Proxmox ZFS transfer vs LVM, progress
If you're using LVM to migrate VMs between nodes, you won't get any progress reported; with ZFS you get a percentage complete, which is useful. With LVM migration between nodes, all you can do is issue "pkill -USR1 dd" on the terminal, which tells you how many GB have been transferred at that moment, but not a percentage, so it isn't easy to estimate how long it will take.
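To get periodic figures instead of signalling dd by hand, a quick loop works; this is a minimal sketch assuming the migration's dd is the only dd process running on the node:

# Signal dd every 30 seconds; each SIGUSR1 makes dd print its transfer
# statistics so far (they should appear in the migration task output)
while pgrep -x dd >/dev/null; do
    pkill -USR1 -x dd
    sleep 30
done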
Tuesday, June 19, 2018
Horizon View Dell OpenManage manual install via esxcli
If you have the Dell OpenManage VIB and wish to install it via the shell, you'll see something like this on various sites and blogs:
esxcli software vib install -d OM-Srv....zip
In some older articles you'll see --depot instead of -d as well.
It is quite fussy about paths: it won't work even if you're in the directory containing the zip file; you still need to reference the full path. I had 4 hosts to run this on, so I copied the file from a shared vSAN datastore to /tmp and ran it on each like so:
esxcli software vib install -d=/tmp/OM-SrvAdmin-Dell-Web-9.1.0-2757.VIB-ESX65i_A00
Note I've added an equals sign. I suggest typing a space before OM and hitting Tab, as it won't otherwise autocomplete, which is annoying; then delete the space, using Ctrl and the left cursor key to jump back quickly.
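With SSH enabled on the hosts, a loop like this saves repeating the copy and install by hand (the hostnames and datastore path below are placeholders for illustration, not from the original setup):

# Placeholder hostnames and datastore path - substitute your own
for h in esx01 esx02 esx03 esx04; do
    ssh root@$h "cp /vmfs/volumes/vsanDatastore/OM-SrvAdmin-Dell-Web-9.1.0-2757.VIB-ESX65i_A00.zip /tmp/ && esxcli software vib install -d=/tmp/OM-SrvAdmin-Dell-Web-9.1.0-2757.VIB-ESX65i_A00.zip"
done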
Hope this saves somebody a few minutes of messing. There is a method for adding all of this into vCenter as a converged package, but it's a licensed paid extra from Dell. There's also a graphical method via Update Manager in ESXi, using an online pull from http://vmwaredepot.dell.com/, which probably scales better, but it may force automatic maintenance mode where that isn't strictly required to install this VIB.
Tuesday, March 08, 2016
Bad usernames
I have a server with a public-facing sshd (stuck on the standard port 22). Running this (as root, since lastb reads the failed-login records in /var/log/btmp) gives a rough idea of which default usernames to avoid:
lastb -w | awk '{print $1}' | sort | uniq -c | sort -rn | more
     39 admin
     29 oracle
     21 test
      9 user
      9 postgres
      8 guest
      8 git
      8 a
      7 nagios
      4 ubuntu
      4 ftpuser
      3 redmine
      3 office
      3 developer
      3 b
      3 alias
      3 ADMIN
      2 zhangyan
      2 www
      2 vyatta
      2 ucpss
      2 ubnt
      2 tomcat
      2 teamspeak3
      2 teamspeak
      2 steam
What else can be done, besides disabling passwords and allowing SSH key-based authentication only? Limiting exposure with fail2ban slows down any brute-force attempt, and whitelisting by geolocating IP addresses may help too: http://www.axllent.org/docs/view/ssh-geoip/
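For example, a minimal fail2ban sshd jail might look like this (the thresholds are arbitrary illustrations, and older fail2ban versions want plain seconds rather than suffixes like 10m):

# Minimal sketch of an sshd jail - append to /etc/fail2ban/jail.local
cat >> /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = yes
port     = ssh
maxretry = 3
findtime = 10m
bantime  = 1h
EOF
systemctl restart fail2ban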
And of course, only open the ports you actually need via the firewall.
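With ufw, for instance (assuming an Ubuntu/Debian-style box; use your distribution's equivalent otherwise):

# Deny all inbound by default, then allow only what is needed
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw enable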