# balena deployment of self-hosted GitHub runners
Runners are deployed in two variants, `vm` and `container`, where `vm` is isolated and safe to use on public repositories. See `github-runner-vm` and `self-hosted-runners` for image sources.
Firecracker allows overprovisioning (oversubscribing) of both CPU and memory for the virtual machines (VMs) running on a host: the total vCPUs and memory allocated to the VMs can exceed the physical CPU cores and memory available on the host machine. To make the most efficient use of host resources, we slightly overprovision the host hardware so that if/when all allocated resources are consumed by jobs (e.g. yocto builds) there is minimal overlap that could lead to performance degradation.
See the github-runner-vm README for more.
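As a concrete illustration of oversubscription, the arithmetic below uses made-up numbers (host core count, per-VM vCPU allocation, and VM count are assumptions, not the fleet's actual configuration):

```shell
# Hypothetical sizing: 5 runner VMs at 4 vCPUs each on a 16-core host
HOST_CORES=16
VCPUS_PER_VM=4
NUM_VMS=5
TOTAL_VCPUS=$((VCPUS_PER_VM * NUM_VMS))
RATIO=$((TOTAL_VCPUS * 100 / HOST_CORES))
echo "allocated ${TOTAL_VCPUS} vCPUs on ${HOST_CORES} cores (${RATIO}% of host capacity)"
```

With these numbers the host is 125% subscribed, which is fine as long as all VMs rarely peak at the same time.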
## balenaOS can be deployed into Hetzner Robot
- Order a suitable machine in an ES rack (remote power controls)
- Download the balenaOS production image from the target balenaCloud fleet
- For x64 only: unwrap the image
- Copy the unwrapped image to the S3 playground bucket and make it public:

  ```shell
  aws s3 cp balena.img s3://{{bucket}}/ --acl public-read
  ```

- Activate the Hetzner Rescue system
- Reboot or reset the server
> [!NOTE]
> This leaves the second block device unpaired and empty
- Download and uncompress the unwrapped balenaOS image to `/tmp` using `wget`
- (Optional) Zero out the target disk(s):

  ```shell
  for device in nvme{0,1}n1; do blkdiscard -f /dev/${device}; done
  ```
- Download the image from S3 via `wget` (the URL is in the S3 dashboard)
- Write the image to disk (check `lsblk` output for the target block device):

  ```shell
  dd if=balena.img of=/dev/nvme1n1 bs=$(blockdev --getbsz /dev/nvme1n1)
  ```

- Check the resulting partitions with `fdisk -l /dev/nvme1n1`
- Reboot
- Manually power cycle again via the Robot dashboard to work around this issue
- The machine should provision into the corresponding fleet
> [!NOTE]
> Use the `generic-amd64` or `generic-aarch64` balenaOS device type
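The `dd` write step above is destructive, so here is a minimal sketch of the same pattern exercised against throwaway temp files instead of real block devices (the image size and block size are arbitrary; on a real host the target would be `/dev/nvme1n1` and the block size would come from `blockdev --getbsz`):

```shell
set -eu
IMG=$(mktemp)   # stands in for balena.img
DST=$(mktemp)   # stands in for the target disk, e.g. /dev/nvme1n1
dd if=/dev/zero of="$IMG" bs=1024 count=64 2>/dev/null   # fake 64 KiB "image"
BS=4096   # on a real disk: BS=$(blockdev --getbsz /dev/nvme1n1)
dd if="$IMG" of="$DST" bs="$BS" 2>/dev/null
# Verify the copy byte-for-byte before rebooting
if cmp -s "$IMG" "$DST"; then RESULT=verified; fi
echo "write ${RESULT}"
rm -f "$IMG" "$DST"
```

The `cmp` check is a cheap sanity test; on a real flash, comparing the image against the first bytes of the device catches truncated writes before the reboot.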
- Remove any existing RAID array:

  ```shell
  mdadm --stop /dev/md127
  mdadm --remove /dev/md127
  ```
- Create the RAID array:

  ```shell
  mdadm --create --verbose /dev/md127 \
    --level=1 \
    --raid-devices=2 \
    --metadata=1.0 \
    /dev/nvme{0,1}n1
  ```
- Increase the (re)sync speed limits:

  ```shell
  sysctl -w dev.raid.speed_limit_min=500000
  sysctl -w dev.raid.speed_limit_max=5000000
  ```
- Download the image from S3 via `wget` (the URL is in the S3 dashboard)
- Write the image to the RAID array:

  ```shell
  dd if=balena.img of=/dev/md127 bs=$(blockdev --getbsz /dev/md127)
  ```

- Check the resulting partitions with `fdisk -l /dev/md127`
- Monitor synchronization progress:

  ```shell
  watch cat /proc/mdstat
  ```
- Reboot when 100% synchronized
- Manually power cycle again via the Robot dashboard to work around this issue
- The machine should provision into the corresponding fleet
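For unattended hosts it can be handy to scrape the resync percentage rather than watching `mdstat` interactively. A sketch against a captured sample line (the figures are illustrative, not from a real array):

```shell
# Sample resync line as it appears in /proc/mdstat (illustrative values)
MDSTAT_LINE='      [=>...................]  resync =  7.9% (123456/1562500) finish=12.3min speed=100000K/sec'
# On a live host: PCT=$(grep -o '[0-9.]*%' /proc/mdstat | head -n1)
PCT=$(printf '%s\n' "$MDSTAT_LINE" | grep -o '[0-9.]*%' | head -n1)
echo "resync at ${PCT}"
```

The same one-liner can gate the reboot step, e.g. loop until the percentage disappears from `/proc/mdstat` (mdadm drops the progress line once the array is clean).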