trentdocs_website/site/search/search_index.json

{
"docs": [
{
"location": "/",
"text": "Welcome to Trent Docs\n\n\nGit Repo For These Docs\n\n\nObviously, the commit history will reflect the time when these documents are written.\n\n\n\n\nApt Pinning Artful Aardvark Packages in Xenial Xerus\n\n\nLXD Container Home Server Networking For Dummies\n\n\nLXD Container Foo\n\n\nHow To Reassign A Static Ip Address with dnsmasq\n\n\nServe And Share Apps From Your Phone With Fdroid\n\n\nNspawn Containers\n\n\nGentoo LXD Container\n\n\nMastodon on Arch\n\n\nDebian Nspawn Container On Arch For Testing Apache Configurations\n\n\nDynamic Cacheing Nginx Reverse Proxy For Pacman\n\n\nFreeBSD Jails on FreeNAS\n \n\n\nQuick Dirty Redis Nspawn Container on Arch Linux\n\n\nQuick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nMisc Tips, Trouble Shooting\n\n\nSelf Signed Certs\n\n\nSelfoss on Centos7\n\n\nStupid Package Manager Tricks\n\n\nStupid KVM Tricks",
"title": "Home"
},
{
"location": "/#welcome-to-trent-docs",
"text": "",
"title": "Welcome to Trent Docs"
},
{
"location": "/#git-repo-for-these-docs",
"text": "Obviously, the commit history will reflect the time when these documents are written. Apt Pinning Artful Aardvark Packages in Xenial Xerus LXD Container Home Server Networking For Dummies LXD Container Foo How To Reassign A Static Ip Address with dnsmasq Serve And Share Apps From Your Phone With Fdroid Nspawn Containers Gentoo LXD Container Mastodon on Arch Debian Nspawn Container On Arch For Testing Apache Configurations Dynamic Cacheing Nginx Reverse Proxy For Pacman FreeBSD Jails on FreeNAS Quick Dirty Redis Nspawn Container on Arch Linux Quick Dirty Postgresql Nspawn Container on Arch Linux Misc Tips, Trouble Shooting Self Signed Certs Selfoss on Centos7 Stupid Package Manager Tricks Stupid KVM Tricks",
"title": "Git Repo For These Docs"
},
{
"location": "/apt_pinning_artful_aardvark_packages_in_xenial_xerus/",
"text": "Apt Pinning Artful Aardvark Packages in Xenial Xerus\n\n\nYou want to set up apt-pinning so that you can explicitly install packages from\n\nartful\n, on your \nxenial\n machine, but you also want to be able to issue the command\n\napt-get dist-upgrade\n and have nothing automatically upgrade from \nxenial\n to \nartful\n.\n\n\nIn order to get this to work you have to edit three files. The first file is\n\n/etc/apt/sources.list\n. Make a double length version of the file, with the second\nhalf of the file describing the \nartful\n equivalent of the \nxenial\n repos.\nLike this.\n\n\n# /etc/apt/sources.list\ndeb http://archive.ubuntu.com/ubuntu xenial main restricted\ndeb-src http://archive.ubuntu.com/ubuntu xenial main restricted\n\ndeb http://archive.ubuntu.com/ubuntu xenial-updates main restricted\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted\n\ndeb http://archive.ubuntu.com/ubuntu xenial universe\ndeb-src http://archive.ubuntu.com/ubuntu xenial universe\ndeb http://archive.ubuntu.com/ubuntu xenial-updates universe\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates universe\n\ndeb http://archive.ubuntu.com/ubuntu xenial multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial multiverse\ndeb http://archive.ubuntu.com/ubuntu xenial-updates multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates multiverse\n\ndeb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse\n\ndeb http://security.ubuntu.com/ubuntu xenial-security main restricted\ndeb-src http://security.ubuntu.com/ubuntu xenial-security main restricted\ndeb http://security.ubuntu.com/ubuntu xenial-security universe\ndeb-src http://security.ubuntu.com/ubuntu xenial-security universe\ndeb http://security.ubuntu.com/ubuntu xenial-security multiverse\ndeb-src http://security.ubuntu.com/ubuntu xenial-security multiverse\n\n## Uncomment the following two lines to add software from Canonical's\n## 'partner' repository.\n## This software is not part of Ubuntu, but is offered by Canonical and the\n## respective vendors as a service to Ubuntu users.\n# deb http://archive.canonical.com/ubuntu xenial partner\n# deb-src http://archive.canonical.com/ubuntu xenial partner\n\ndeb http://archive.ubuntu.com/ubuntu artful main restricted\ndeb-src http://archive.ubuntu.com/ubuntu artful main restricted\n\ndeb http://archive.ubuntu.com/ubuntu artful-updates main restricted\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates main restricted\n\ndeb http://archive.ubuntu.com/ubuntu artful universe\ndeb-src http://archive.ubuntu.com/ubuntu artful universe\ndeb http://archive.ubuntu.com/ubuntu artful-updates universe\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates universe\n\ndeb http://archive.ubuntu.com/ubuntu artful multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful multiverse\ndeb http://archive.ubuntu.com/ubuntu artful-updates multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates multiverse\n\ndeb http://archive.ubuntu.com/ubuntu artful-backports main restricted universe multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful-backports main restricted universe multiverse\n\ndeb http://security.ubuntu.com/ubuntu artful-security main restricted\ndeb-src http://security.ubuntu.com/ubuntu artful-security main restricted\ndeb http://security.ubuntu.com/ubuntu artful-security universe\ndeb-src http://security.ubuntu.com/ubuntu 
artful-security universe\ndeb http://security.ubuntu.com/ubuntu artful-security multiverse\n\n\n\n\nNow create a new file \n/etc/apt/preferences.d/xenial\n with the\nfollowing content.\n\n\nPackage: *\nPin: release a=xenial\nPin-Priority: 900\n\n\n\n\nAnd create one more file \n/etc/apt/preferences.d/artful\n with the\nfollowing content.\n\n\nPackage: *\nPin: release a=artful\nPin-Priority: 300\n\n\n\n\nActually, I'm not entirely certain these are the optimal apt-pinning\npriority numbers. There's a little bit of art to apt-pinning.\n\n\nSo you can verify that nothing will automatically upgrade with the\nfollowing command.\n\n\n# the result of this command should be that nothing upgrades\napt-get dist-upgrade\n\n\n\n\nBut let's suppose that you want to explicitly install a package, and\nhopefully the upgraded dependencies which it needs from \nartful\n.\n\napt-cache madison\n is a useful command.\n\n\napt-cache madison weather-util\n# outputs the following\nweather-util | 2.3-2 | http://archive.ubuntu.com/ubuntu artful/universe amd64 Packages\nweather-util | 2.0-1 | http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages\nweather-util | 2.0-1 | http://archive.ubuntu.com/ubuntu xenial/universe Sources\nweather-util | 2.3-2 | http://archive.ubuntu.com/ubuntu artful/universe Sources\n\n\n\n\nAs you can see, two different versions of \nweather-util\n are available (as\nwell as two different source versions), one each from the \nxenial\n\nand the \nartful\n repos.\n\n\nBut if you type \napt-get install weather-util\n, the old version from the \nxenial\n\nrepo will be installed. Which version gets installed by default is entirely a matter of getting\nthe apt-pinning priority numbers correct.\n\n\nTo explicitly install the newer version of \nweather-util\n, and perhaps more\nimportantly its upgraded \nweather-util-data\n dependency, use the following command.\n\n\napt-get -t artful install weather-util\n\n\n\n\nBut hold on, HOLD ON! The above command doesn't actually confirm what version is\ngoing to be installed, and you'd like to have one last look over things, so add\nthe \n-V\n flag to your \napt-get\n command.\n\n\nroot@xhost:~# apt-get -t artful install weather-util -V\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nThe following additional packages will be installed:\n weather-util-data (2.3-2)\nThe following NEW packages will be installed:\n weather-util (2.3-2)\n weather-util-data (2.3-2)\n 0 upgraded, 2 newly installed, 0 to remove and 389 not upgraded.\n Need to get 0 B/3375 kB of archives.\n After this operation, 3557 kB of additional disk space will be used.\n Do you want to continue? [Y/n] \n\n\n\n\nThat's what you're looking for.
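\n\nIf you ever want to double-check how the pins landed, \napt-cache policy\n prints the pin priority of every known repo, plus the installed and candidate versions of any package you name. (This is just a sanity check; it isn't required for the setup above.)\n\n\n# show the pin priority of each repo\napt-cache policy\n# or inspect a single package\napt-cache policy weather-util\n\n\n\n\n",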
"title": "Apt Pinning Artful Aardvark Packages in Xenial Xerus"
},
{
"location": "/apt_pinning_artful_aardvark_packages_in_xenial_xerus/#apt-pinning-artful-aardvark-packages-in-xenial-xerus",
"text": "You want to set up apt-pinning so that you can explicitly install packages from artful , on your xenial machine, but you also want to be able to issue the command apt-get dist-upgrade and have nothing automatically upgrade from xenial to artful . In order to get this to work you have to edit three files. The first file is /etc/apt/sources.list . Make a double length version of the file, with the second\nhalf of the file describing the artful equivalent of the xenial repos.\nLike this. # /etc/apt/sources.list\ndeb http://archive.ubuntu.com/ubuntu xenial main restricted\ndeb-src http://archive.ubuntu.com/ubuntu xenial main restricted\n\ndeb http://archive.ubuntu.com/ubuntu xenial-updates main restricted\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates main restricted\n\ndeb http://archive.ubuntu.com/ubuntu xenial universe\ndeb-src http://archive.ubuntu.com/ubuntu xenial universe\ndeb http://archive.ubuntu.com/ubuntu xenial-updates universe\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates universe\n\ndeb http://archive.ubuntu.com/ubuntu xenial multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial multiverse\ndeb http://archive.ubuntu.com/ubuntu xenial-updates multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial-updates multiverse\n\ndeb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse\ndeb-src http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse\n\ndeb http://security.ubuntu.com/ubuntu xenial-security main restricted\ndeb-src http://security.ubuntu.com/ubuntu xenial-security main restricted\ndeb http://security.ubuntu.com/ubuntu xenial-security universe\ndeb-src http://security.ubuntu.com/ubuntu xenial-security universe\ndeb http://security.ubuntu.com/ubuntu xenial-security multiverse\ndeb-src http://security.ubuntu.com/ubuntu xenial-security multiverse\n\n## Uncomment the following two lines to add software from Canonical's\n## 'partner' repository.\n## This software is not part of Ubuntu, but is offered by Canonical and the\n## respective vendors as a service to Ubuntu users.\n# deb http://archive.canonical.com/ubuntu xenial partner\n# deb-src http://archive.canonical.com/ubuntu xenial partner\n\ndeb http://archive.ubuntu.com/ubuntu artful main restricted\ndeb-src http://archive.ubuntu.com/ubuntu artful main restricted\n\ndeb http://archive.ubuntu.com/ubuntu artful-updates main restricted\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates main restricted\n\ndeb http://archive.ubuntu.com/ubuntu artful universe\ndeb-src http://archive.ubuntu.com/ubuntu artful universe\ndeb http://archive.ubuntu.com/ubuntu artful-updates universe\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates universe\n\ndeb http://archive.ubuntu.com/ubuntu artful multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful multiverse\ndeb http://archive.ubuntu.com/ubuntu artful-updates multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful-updates multiverse\n\ndeb http://archive.ubuntu.com/ubuntu artful-backports main restricted universe multiverse\ndeb-src http://archive.ubuntu.com/ubuntu artful-backports main restricted universe multiverse\n\ndeb http://security.ubuntu.com/ubuntu artful-security main restricted\ndeb-src http://security.ubuntu.com/ubuntu artful-security main restricted\ndeb http://security.ubuntu.com/ubuntu artful-security universe\ndeb-src http://security.ubuntu.com/ubuntu artful-security universe\ndeb http://security.ubuntu.com/ubuntu artful-security multiverse Now create a new 
file /etc/apt/preferences.d/xenial with the\nfollowing content. Package: *\nPin: release a=xenial\nPin-Priority: 900 And create one more file /etc/apt/preferences.d/artful with the\nfollowing content. Package: *\nPin: release a=artful\nPin-Priority: 300 Actually, I'm not entirely certain these are the optimal apt-pinning\npriority numbers. There's a little bit of art to apt-pinning. So you can verify that nothing will automatically upgrade with the\nfollowing command. # the result of this command should be that nothing upgrades\napt-get dist-upgrade But let's suppose that you want to explicitly install a package, and\nhopefully the upgraded dependencies which it needs from artful . apt-cache madison is a useful command. apt-cache madison weather-util\n# outputs the following\nweather-util | 2.3-2 | http://archive.ubuntu.com/ubuntu artful/universe amd64 Packages\nweather-util | 2.0-1 | http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages\nweather-util | 2.0-1 | http://archive.ubuntu.com/ubuntu xenial/universe Sources\nweather-util | 2.3-2 | http://archive.ubuntu.com/ubuntu artful/universe Sources As you can see, two different versions of weather-util are available (as\nwell as two different source versions), one each from the xenial \nand the artful repos. But if you type apt-get install weather-util , the old version from the xenial \nrepo will be installed. Which version gets installed by default is entirely a matter of getting\nthe apt-pinning priority numbers correct. To explicitly install the newer version of weather-util , and perhaps more\nimportantly its upgraded weather-util-data dependency, use the following command. apt-get -t artful install weather-util But hold on, HOLD ON! The above command doesn't actually confirm what version is\ngoing to be installed, and you'd like to have one last look over things, so add\nthe -V flag to your apt-get command. root@xhost:~# apt-get -t artful install weather-util -V\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nThe following additional packages will be installed:\n weather-util-data (2.3-2)\nThe following NEW packages will be installed:\n weather-util (2.3-2)\n weather-util-data (2.3-2)\n 0 upgraded, 2 newly installed, 0 to remove and 389 not upgraded.\n Need to get 0 B/3375 kB of archives.\n After this operation, 3557 kB of additional disk space will be used.\n Do you want to continue? [Y/n] That's what you're looking for.
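 If you ever want to double-check how the pins landed, apt-cache policy prints the pin priority of every known repo, plus the installed and candidate versions of any package you name. (This is just a sanity check; it isn't required for the setup above.) # show the pin priority of each repo\napt-cache policy\n# or inspect a single package\napt-cache policy weather-util",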
"title": "Apt Pinning Artful Aardvark Packages in Xenial Xerus"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/",
"text": "LXD Container Home Server Networking For Dummies\n\n\nWhy?\n\n\nIf you're going to operate a fleet of LXD containers for home\nentertainment, you probably want some of them exposed with their\nown ip addresses on your home network, so that you can use them\nas containerized servers for various applications.\n\n\nOthers containers, you might want to be inaccessable from the lan,\nin a natted subnet, where they can solicit connections to the\noutside world from within their natted subnet, but are not addressable\nfrom the outside. A database server that you connect a web app to, for\ninstance, or a web app that you have a reverse proxy in front of.\n\n\nBut these are two separate address spaces, so ideally all of the containers\nwould have a second interface of their own, by which they could connect\nto a third network, that would be a private network that all of the containers\ncan use to talk directly to each other (or the host machine).\n\n\nIt's pretty straightforward, you just have to glue all the pieces together.\n\n\nThree Part Overview.\n\n\n\n\n\n\nDefine and create some bridges. \n\n\n\n\n\n\nDefine profiles that combine the network\ninterfaces in different combinations. In addition to two\nbridges you will have a macvlan with which to expose the containers\nthat you want exposed, but the macvlan doesn't come into\nplay until here in step two when you define profiles. \n\n\n\n\n\n\nAssign each container which profile it should use,\nand then configure the containers to use the included\nnetwork interfaces correctly. \n\n\n\n\n\n\nBuild Sum Moar Bridges\n\n\nThe containers will all have two network interfaces from\ntheir own internal point of view, \neth0\n and \neth1\n. \n\n\nIn this\nscheme we create a bridge for a natted subnet and a bridge for\na non-natted subnet. All of the containers will connect to the\nnon-natted subnet on their second interface, \neth1\n, and some\nof the containers will connect to the natted subnet on their \nfirst interface \neth0\n. 
The containers that don't connect\nto the natted subnet will instead connect to a macvlan\non their first interface \neth0\n, but that isn't part of this\nstep.\n\n\nbridge for a natted subnet\n\n\nIf you haven't used lxd before, you'll want to run the command \nlxd init\n.\nBy default this creates exactly the bridge we want, called \nlxdbr0\n.\n\n\nOtherwise you would use the following command to create \nlxdbr0\n.\n\n\nlxc network create lxdbr0\n\n\n\n\nTo generate a table of all the existing interfaces.\n\n\nlxc network list\n\n\n\n\nThis bridge is for our natted subnet, so we just want to go with\nthe default configuration.\n\n\nlxc network show lxdbr0\n\n\n\n\nThis cats a yaml file where you can see the randomly\ngenerated network for \nlxdbr0\n.\n\n\nconfig:\n ipv4.address: 10.99.153.1/24\n ipv4.nat: \"true\"\n ipv6.address: fd42:211e:e008:954b::1/64\n ipv6.nat: \"true\"\ndescription: \"\"\nname: lxdbr0\ntype: bridge\nused_by: []\nmanaged: true\n\n\n\n\nbridge for a non-natted subnet\n\n\nCreate \nlxdbr1\n\n\nlxc network create lxdbr1\n\n\n\n\nUse the following commands to remove nat from \nlxdbr1.\n\n\nlxc network set lxdbr1 ipv4.nat false\nlxc network set lxdbr1 ipv6.nat false\n\n\n\n\nOr if you use this next command, your favourite\ntext editor will pop open, preloaded with the complete yaml file,\nand you can edit the configuration there.\n\n\nlxc network edit lxdbr1\n\n\n\n\nEither way you're looking for a result such as the following.\nNotice that the randomly generated address space is different\nfrom the one for \nlxdbr0\n, and that the *nat keys are set\nto \"false\".\n\n\nconfig:\n ipv4.address: 10.151.18.1/24\n ipv4.nat: \"false\"\n ipv6.address: fd42:89d4:f465:1b20::1/64\n ipv6.nat: \"false\"\ndescription: \"\"\nname: lxdbr1\ntype: bridge\nused_by: []\nmanaged: true\n\n\n\n\nProfiles\n\n\nrecycle the default\n\n\nWhen you first ran \nlxd init\n, that created a default profile.\nConfirm with the following.\n\n\nlxc profile list\n\n\n\n\nTo see what the default profile looks like.\n\n\nlxc profile show default\n\n\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Default LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: default\nused_by: []\n\n\n\n\nprofile the natted\n\n\nThe easiest way to create a new profile is to start by copying another one.\n\n\nlxc profile copy default natted\n\n\n\n\nedit the new \nnatted\n profile\n\n\nlxc profile edit natted\n\n\n\n\nAnd add an \neth1\n interface attached to \nlxdbr1\n. 
\neth0\n and \neth1\n will\nbe the interfaces visible from the container's point of view.\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Natted LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: natted\nused_by: []\n\n\n\n\nAny container assigned to the \nnatted\n profile will have an interface \neth0\n connected\nto a natted subnet, and a second interface \neth1\n connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.\n\n\nprofile the exposed\n\n\nCreate the \nexposed\n profile\n\n\nlxc profile copy natted exposed\n\n\n\n\nand edit the new \nexposed\n profile\n\n\nlxc profile edit exposed\n\n\n\n\nchange the nictype for \neth0\n from \nbridged\n to \nmacvlan\n, and the parent should be\nthe name of the physical ethernet connection on the host machine, instead of a bridge.\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Exposed LXD profile\ndevices:\n eth0:\n nictype: macvlan\n parent: eno1\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: []\n\n\n\n\nAny container assigned to the \nexposed\n profile will have an interface \neth0\n connected\nto a macvlan, addressable from your lan, just like any other arbitrary computer on\nyour home network, and a second interface \neth1\n connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.\n\n\nexposed profile with a regular linux br0 interface bridge\n\n\nYou can configure an Ubuntu server with a br0 interface\n\n\n# /etc/network/interfaces\nauto lo\niface lo inet loopback\n\n# br0 bridge in dhcp configuration with ethernet\n# port ens2 added to it.\nauto br0\niface br0 inet dhcp\n bridge_ports ens2\n bridge_stp off\n bridge_maxwait 0\n\n\n\n\nand a corresponding profile....\n\n\nconfig: {}\ndescription: exposed LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: br0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: []\n\n\n\n\nAssign Containers to Profiles and configure them to connect correctly.\n\n\nThere are a lot of different ways that a Linux instance can solicit network services. So for\nnow I will just describe a method that will work here for an lxc container from ubuntu:16.04, as\nwell as a debian stretch container from images.linuxcontainers.org.\n\n\nStart a new container and assign the profile. We'll use an arbitrary whimsical container name,\n\nquick-joey\n. This process is the same for either the \nnatted\n profile or the \nexposed\n profile.\n\n\nlxc init ubuntu:16.04 quick-joey\n# assign the profile\nlxc profile assign quick-joey exposed\n# start quick-joey\nlxc start quick-joey\n# and start a bash shell\nlxc exec quick-joey bash\n\n\n\n\nWith either an ubuntu:16.04 container or a debian stretch container, using either the \nnatted\n or\n\nexposed\n profile, all of the above configuration work means they will automatically connect on\ntheir \neth0\n interfaces and be able to talk to the internet. 
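A quick sanity check at this point (using the same whimsical container name): \nlxc list\n prints a table of your containers along with the addresses on each of their interfaces, so you can confirm that \neth0\n came up. Nothing below depends on it.\n\n\nlxc list quick-joey\n\n\n\n\n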
You need to edit \n/etc/network/interfaces\n,\nthe main difference being what that file looks like before you edit it.\n\n\nYou need to tell these containers how to connect to the non-natted subnet on \neth1\n.\n\n\nubuntu:16.04\n\n\nIf you start a shell on an ubuntu:16.04 container, you see that \n/etc/network/interfaces\n\ndescribes the loopback device for localhost, then sources \n/etc/network/interfaces.d/*.cfg\n where\nsome magical cloud-config jazz is going on. You just want to add a static ip description for \neth1\n\nto the file \n/etc/network/interfaces\n. And obviously take care that the static ip address you assign is\nunique and on the same subnet as \nlxdbr1\n.\n\n\nReminder: the address for \nlxdbr1\n is 10.151.18.1/24 (but it will be different on your machine).\n\n\nauto lo\niface lo inet loopback\n\nsource /etc/network/interfaces.d/*.cfg\n# what you add goes below here\nauto eth1\niface eth1 inet static\n address 10.151.18.123\n netmask 255.255.255.0\n broadcast 255.255.255.255 \n network 10.151.18.0\n\n\n\n\nubuntu:16.04 using only dhcp for two nics\n\n\nThe example here was tested with eth0 and eth1 connected to\nbr0 and lxdbr1 respectively. You need post-up hooks for both eth0 and\neth1 inside the containers in order to specify the default route; eth0 gets its configuration\ndynamically from cloud-init by default. So disable cloud-init by\ncreating the following file on the container.\n\n\n# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg\nnetwork: {config: disabled}\n\n\n\n\nThen, on the container describe the interfaces.\n\n\n# /etc/network/interfaces\nauto lo\niface lo inet loopback\n\nauto eth1\niface eth1 inet dhcp\npost-up ip route del default dev eth1\n\nauto eth0\niface eth0 inet dhcp\npost-up ip route add default via 192.168.1.1 dev eth0\n\n\n\n\nand delete /etc/network/interfaces.d/50-cloud-init.cfg\n\n\nrm /etc/network/interfaces.d/50-cloud-init.cfg\n\n\n\n\nThe advantage to this scenario is that now you can make copies of the container\nwithout having to update the network descriptions, because both interfaces\nwill solicit addresses via dhcp.\n\n\ndebian stretch\n\n\nThe configuration for a debian stretch container is the same, except that the file\n\n/etc/network/interfaces\n will also describe eth0; you still only have to add the \ndescription for eth1.\n\n\nsystemd-networkd\n\n\nThis seems to work.\n\n\n# eth0.network\n[Match]\nName=eth0\n\n[Network]\nDHCP=ipv4\n\n\n\n\n# eth1.network\n[Match]\nName=eth1\n\n[Network]\nDHCP=ipv4\n\n[DHCP]\nUseRoutes=false\n\n\n\n\nthe /etc/hosts file\n\n\nOnce you assign the containers static ip addresses for their \neth1\n\ninterfaces, you can use the \n/etc/hosts\n file on each container to make them\naware of where the other containers and the host machine are.\n\n\nFor instance, if you want the container \nquick-joey\n to talk directly\nto the host machine, which will be at the ip address of \nlxdbr1\n, start a shell\non the container \nquick-joey\n\n\nlxc exec quick-joey bash\n\n\n\n\nand edit \n/etc/hosts\n\n\n# /etc/hosts\n10.151.18.1 mothership\n\n\n\n\nOr say you have a container named \nfat-cinderella\n that needs to be able to talk\ndirectly to \nquick-joey\n.\n\n\nlxc exec fat-cinderella bash\nvim /etc/hosts\n\n\n\n\n# /etc/hosts\n10.151.18.123 quick-joey\n\n\n\n\netcetera
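\n\nWith those \n/etc/hosts\n entries in place (the names and addresses are just the examples used above), it's easy to confirm that the containers can reach each other by name.\n\n\n# from fat-cinderella\nping -c 3 quick-joey\n\n\n\n\n",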
"title": "LXD Container Home Server Networking For Dummies"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#lxd-container-home-server-networking-for-dummies",
"text": "",
"title": "LXD Container Home Server Networking For Dummies"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#why",
"text": "If you're going to operate a fleet of LXD containers for home\nentertainment, you probably want some of them exposed with their\nown ip addresses on your home network, so that you can use them\nas containerized servers for various applications. Others containers, you might want to be inaccessable from the lan,\nin a natted subnet, where they can solicit connections to the\noutside world from within their natted subnet, but are not addressable\nfrom the outside. A database server that you connect a web app to, for\ninstance, or a web app that you have a reverse proxy in front of. But these are two separate address spaces, so ideally all of the containers\nwould have a second interface of their own, by which they could connect\nto a third network, that would be a private network that all of the containers\ncan use to talk directly to each other (or the host machine). It's pretty straightforward, you just have to glue all the pieces together.",
"title": "Why?"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#three-part-overview",
"text": "Define and create some bridges. Define profiles that combine the network\ninterfaces in different combinations. In addition to two\nbridges you will have a macvlan with which to expose the containers\nthat you want exposed, but the macvlan doesn't come into\nplay until here in step two when you define profiles. Assign each container which profile it should use,\nand then configure the containers to use the included\nnetwork interfaces correctly.",
"title": "Three Part Overview."
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#build-sum-moar-bridges",
"text": "The containers will all have two network interfaces from\ntheir own internal point of view, eth0 and eth1 . In this\nscheme we create a bridge for a natted subnet and a bridge for\na non-natted subnet. All of the containers will connect to the\nnon-natted subnet on their second interface, eth1 , and some\nof the containers will connect to the natted subnet on their \nfirst interface eth0 . The containers that don't connect\nto the natted subnet will instead connect to a macvlan\non their first interface eth0 , but that isn't part of this\nstep.",
"title": "Build Sum Moar Bridges"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#bridge-for-a-natted-subnet",
"text": "If you haven't used lxd before, you'll want to run the command lxd init .\nBy default this creates exactly the bridge we want, called lxdbr0 . Otherwise you would use the following command to create lxdbr0 . lxc network create lxdbr0 To generate a table of all the existing interfaces. lxd network list This bridge is for our natted subnet, so we just want to go with\nthe default configuration. lxc network show lxdbr0 This cats a yaml file where you can see the randomly\ngenerated network for lxdbr0 . config:\n ipv4.address: 10.99.153.1/24\n ipv4.nat: \"true\"\n ipv6.address: fd42:211e:e008:954b::1/64\n ipv6.nat: \"true\"\ndescription: \"\"\nname: lxdbr0\ntype: bridge\nused_by: []\nmanaged: true",
"title": "bridge for a natted subnet"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#bridge-for-a-non-natted-subnet",
"text": "Create lxdbr1 lxc network create lxdbr1 Use the following commands to remove nat from \nlxdbr1. lxc network set lxdbr1 ipv4.nat false\nlxc network set lxdbr1 ipv6.nat false Of if you use this next command, your favourite\ntext editor will pop open, preloaded with the complete yaml file\nand you can edit the configuration there. lxc network edit lxdbr1 Either way you're looking for a result such as the following.\nNotice that the randomly generated address space is different\nthat the one for lxdbr0 , and that the *nat keys are set\nto \"false\". config:\n ipv4.address: 10.151.18.1/24\n ipv4.nat: \"false\"\n ipv6.address: fd42:89d4:f465:1b20::1/64\n ipv6.nat: \"false\"\ndescription: \"\"\nname: lxdbr1\ntype: bridge\nused_by: []\nmanaged: true",
"title": "bridge for a non-natted subnet"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profiles",
"text": "",
"title": "Profiles"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#recycle-the-default",
"text": "When you first ran lxd init , that created a default profile.\nConfirm with the following. lxc profile list To see what the default profile looks like. lxc profile show default config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Default LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: default\nused_by: []",
"title": "recycle the default"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profile-the-natted",
"text": "The easiest way to create a new profile is start by copying another one. lxc profile copy default natted edit the new natted profile lxc profile edit natted And add an eth1 interface attached to lxdbr1 . eth0 and eth1 will\nbe the interfaces visible from the container's point of view. config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Natted LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: natted\nused_by: [] Any container assigned to the natted profile, will have an interface eth0 connected\nto a natted subnet, and a second interface eth1 connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.",
"title": "profile the natted"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profile-the-exposed",
"text": "Create the exposed profile lxc profile copy natted exposed and edit the new exposed profile lxc profile edit exposed change the nictype for eth0 from bridged to macvlan , and the parent should be\nthe name of the physical ethernet connection on the host machine, instead of a bridge. config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Exposed LXD profile\ndevices:\n eth0:\n nictype: macvlan\n parent: eno1\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: [] Any container assigned to the exposed profile, will have an interface eth0 connected\nto a macvlan, addressable from your lan, just like any other arbitrary computer on\nyour home network, and a second interface eth1 connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.",
"title": "profile the exposed"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#exposed-profile-with-a-regular-linux-br0-interface-bridge",
"text": "You can configure an Ubuntu server with a br0 interface # /etc/network/interfaces\nauto lo\niface lo inet loopback\n\n# br0 bridge in dhcp configuration with ethernet\n# port ens2 added to it.\nauto br0\niface br0 inet dhcp\n bridge_ports ens2\n bridge_stp off\n bridge_maxwait 0 and a cooresponding profile.... config: {}\ndescription: exposed LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: br0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: []",
"title": "exposed profile with a regular linux br0 interface bridge"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#assign-containers-to-profiles-and-configure-them-to-connect-correctly",
"text": "There are a lot of different ways that a Linux instance can solicit network services. So for\nnow I will just describe a method that will work here for a lxc container from ubuntu:16.04, as\nwell as a debian stretch container from images.linuxcontainers.org. Start a new container and assign the profile. We'll use an arbitrary whimsical container name, quick-joey . This process is the same for either the natted profile or the exposed profile. lxc init ubuntu:16.04 quick-joey\n# assign the profile\nlxc profile assign quick-joey exposed\n# start quick-joey\nlxc start quick-joey\n# and start a bash shell\nlxc exec quick-joey bash With either an ubuntu:16.04 container, or a debian stretch container, for either the natted or exposed profile, because of all the above configuration work they will automatically connect on\ntheir eth0 interfaces and be able to talk to the internet. You need to edit /etc/network/interfaces ,\nthe main difference being what that file looks like before you edit it. You need to tell these containers how to connect to the non-natted subnet on eth1 .",
"title": "Assign Containers to Profiles and configure them to connect correctly."
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#ubuntu1604",
"text": "If you start a shell on an ubuntu:16.04 container, you see that /etc/network/interfaces \ndescribes the loopback device for localhost, then sources /etc/network/interfaces.d/*.cfg where\nsome magical cloud-config jazz is going on. You just want to add a static ip description for eth1 \nto the file /etc/network/interfaces . And obviously take care that the static ip address you assign is\nunique and on the same subnet with lxdbr1 . Reminder: the address for lxdbr1 is 10.151.18.1/24, (but it will be different on your machine). auto lo\niface lo inet loopback\n\nsource /etc/network/interfaces.d/*.cfg\n# what you add goes below here\nauto eth1\niface eth1 inet static\n address 10.151.18.123\n netmask 255.255.255.0\n broadcast 255.255.255.255 \n network 10.151.18.0",
"title": "ubuntu:16.04"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#ubuntu1604-using-only-dhcp-for-two-nics",
"text": "So the example here is tested with eth0 and eth1 connected to\nbr0 and lxdbr1 respectively. You need post-up hooks for both eth0 and\neth1 inside the containers, in order to specify the default route, eth0 gets it's configuration\ndynamically by default from cloud-init. So disable cloud-init by\ncreating the following file on the container. # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg\nnetwork: {config: disabled} Then, on the container describe the interfaces. # /etc/network/interfaces\nauto lo\niface lo inet loopback\n\nauto eth1\niface eth1 inet dhcp\npost-up route del default dev eth1\n\nauto eth0\niface eth0 inet dhcp\npost-up route add default dev eth0 via 192.168.1.1 and delete /etc/network/interfaces.d/50-cloud-init.cfg rm /etc/network/interfaces.d/50-cloud-init.cfg The advantage to this scenario is now you can make copies of the container\nwithout having to update the network descriptions, because both interfaces\nwill solicit addresses via dhcp.",
"title": "ubuntu:16.04 using only dhcp for two nics"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#debian-stretch",
"text": "The configuration for a debian stretch container is the same, except the the file /etc/network/interfaces will also describe eth0, but you only have to add the \ndescription for eth1.",
"title": "debian stretch"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#systemd-networkd",
"text": "This seems to work. # eth0.network\n[Match]\nName=eth0\n\n[Network]\nDHCP=ipv4 # eth1.network\n[Match]\nName=eth1\n\n[Network]\nDHCP=ipv4\n\n[DHCP]\nUseRoutes=false",
"title": "systemd-networkd"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#the-etchosts-file",
"text": "Once you assign the containers static ip addresses for their eth1 \ninterfaces, you can use the /etc/hosts file on each container to make them\naware of where the other containers and the host machine are. For instance, if you want the container quick-joey to talk directly\nto the host machine, which will be at the ip address of lxdbr1 , start a shell\non the container quick-joey lxc exec quick-joey bash and edit /etc/hosts # /etc/hosts\n10.151.18.1 mothership Or you have a container named fat-cinderella , that needs to be able to talk\ndirectly quick-joey . lxc exec fat-cinderella bash\nvim /etc/hosts # /etc/hosts\n10.151.18.123 quick-joey etcetera",
"title": "the /etc/hosts file"
},
{
"location": "/lxd_container_foo/",
"text": "More Notes and Tips for Using LXD\n\n\nLXD Server and \nClients\n\n\nYour LXD hosts can establish a secure client-server relationship very quickly and easily.\n\n\n# First enable networking on Both the server and client:\nlxc config set core.https_address [::]:8443\n\n# Then on the server, set a password:\nlxc config set core.trust_password <something-secure>\n\n# Then on the client, add the server as a remote,\n# and enter the password you just created for it:\nlxc remote add <server> <ip address>\n\n\n\n\nNow from the perspective of the client machine, the server is just another remote, same as:\n\n\n\n\nlocal\n (the default)\n\n\nubuntu\n (where ubuntu images come from), and \n\n\nimages\n (where lxc images of other distros come from).\n\n\nclyde\n (your host named clyde)\n\n\n\n\n# command to list remotes\n# returns local, images, ubuntu, etc.\nlxc remote list\n\n# command to list containers on a remote\nlxc list <remote>:\n# i.e. for a remote named \"black\"\nlxc list black:\n\n# command to list images on a remote\n# i.e. for a remote name \"images\"\nlxc image list images:\n# or for a remote named \"ubuntu\"\nlxc image list ubuntu:\n# or for a specific image\nlxc image list ubuntu:16.04 # or\nlxc image list ubuntu:fdceb4d263b9\n\n\n\n\nNow you can move containers around between servers and clients.\n\n\n# launch an ubuntu container from the ubuntu remote\nlxc launch ubuntu:16.04 <optional name>\n# or from a remote named \"black\"\nlxc launch black:069b95ed3a60 <optional name>\n# to list the images that black has available\nlxc image list black:\n\n# copy a container from a server named \"black\"\n# to your local client\nlxc copy black:jerry <optional name to copy to>\n# or from \"local\" back to \"black\"\nlxc copy jerry black:<optional name to copy to>\n# or move\nlxc move black:jerry <optional name to copy to>\n\n# or change the default remote from \"local\" to \"black\"\nlxc remote set-default black\n# and then reverse the syntax\n# copy a container from a server named \"black\"\n# to your local client\nlxc copy jerry local:<optional name to copy to>\n# or from \"local\" back to \"black\"\nlxc copy local:jerry <optional name to copy to>\n\n\n\n\nOr remote control another LXD server\n\n\n# bash shell on container named \"jim\" running on\n# a remote server named \"black\"\nlxc exec black:jim bash\n\n# copy that\nlxc copy black:jim black:francine\n\n# snapshot\nlxc snapshot black:jim\n\n# delete a snapshot from a remote container\n# first get the containers info to see what\n# snapshots it has\nlxc info black:jim\n# and then delete\nlxc delete black:jim/snap0\n\n# or rollback/restore,\n# slightly different syntax vs \"delete\"\nlxc restore black:jim snap0",
"title": "LXD Container Foo"
},
{
"location": "/lxd_container_foo/#more-notes-and-tips-for-using-lxd",
"text": "",
"title": "More Notes and Tips for Using LXD"
},
{
"location": "/lxd_container_foo/#lxd-server-and-clients",
"text": "",
"title": "LXD Server and Clients"
},
{
"location": "/lxd_container_foo/#your-lxd-hosts-can-establish-a-secure-client-server-relationship-very-quickly-and-easily",
"text": "# First enable networking on Both the server and client:\nlxc config set core.https_address [::]:8443\n\n# Then on the server, set a password:\nlxc config set core.trust_password <something-secure>\n\n# Then on the client, add the server as a remote,\n# and enter the password you just created for it:\nlxc remote add <server> <ip address>",
"title": "Your LXD hosts can establish a secure client-server relationship very quickly and easily."
},
{
"location": "/lxd_container_foo/#now-from-the-perspective-of-the-client-machine-the-server-is-just-another-remote-same-as",
"text": "local (the default) ubuntu (where ubuntu images come from), and images (where lxc images of other distros come from). clyde (your host named clyde) # command to list remotes\n# returns local, images, ubuntu, etc.\nlxc remote list\n\n# command to list containers on a remote\nlxc list <remote>:\n# i.e. for a remote named \"black\"\nlxc list black:\n\n# command to list images on a remote\n# i.e. for a remote name \"images\"\nlxc image list images:\n# or for a remote named \"ubuntu\"\nlxc image list ubuntu:\n# or for a specific image\nlxc image list ubuntu:16.04 # or\nlxc image list ubuntu:fdceb4d263b9",
"title": "Now from the perspective of the client machine, the server is just another remote, same as:"
},
{
"location": "/lxd_container_foo/#now-you-can-move-containers-around-between-servers-and-clients",
"text": "# launch an ubuntu container from the ubuntu remote\nlxc launch ubuntu:16.04 <optional name>\n# or from a remote named \"black\"\nlxc launch black:069b95ed3a60 <optional name>\n# to list the images that black has available\nlxc image list black:\n\n# copy a container from a server named \"black\"\n# to your local client\nlxc copy black:jerry <optional name to copy to>\n# or from \"local\" back to \"black\"\nlxc copy jerry black:<optional name to copy to>\n# or move\nlxc move black:jerry <optional name to copy to>\n\n# or change the default remote from \"local\" to \"black\"\nlxc remote set-default black\n# and then reverse the syntax\n# copy a container from a server named \"black\"\n# to your local client\nlxc copy jerry local:<optional name to copy to>\n# or from \"local\" back to \"black\"\nlxc copy local:jerry <optional name to copy to>",
"title": "Now you can move containers around between servers and clients."
},
{
"location": "/lxd_container_foo/#or-remote-control-another-lxd-server",
"text": "# bash shell on container named \"jim\" running on\n# a remote server named \"black\"\nlxc exec black:jim bash\n\n# copy that\nlxc copy black:jim black:francine\n\n# snapshot\nlxc snapshot black:jim\n\n# delete a snapshot from a remote container\n# first get the containers info to see what\n# snapshots it has\nlxc info black:jim\n# and then delete\nlxc delete black:jim/snap0\n\n# or rollback/restore,\n# slightly different syntax vs \"delete\"\nlxc restore black:jim snap0",
"title": "Or remote control another LXD server"
},
{
"location": "/how_to_reassign_a_static_ip_address_with_dnsmasq/",
"text": "How To Reassign a Static ip address with dnsmasq\n\n\nOn your router you can assign static ip addresses for various machines\nin your network, by writing the reservations in the file \n/etc/dnsmasq.conf\n.\n\n\nThese will be in the form as below.\n\n\ndhcp-host=<mac address>,<ip address>\n\n\nSo here's how you transfer an existing static ip address assignment to\na new client machine. Begin by editting the file \n/etc/dnsmasq.conf\n on\nyour router, and update the mac address associated with the intended\nip address.\n\n\nNext, temporarily stop dnsmasq.\n\n\nsystemctl stop dnsmasq\n\n\n\n\nNext shutdown networking on the new client machine. Shutting the machine down might work,\nor the command \ndhclient -v -r\n might get the job done (you will lose the connection).\n\n\nNow on the router, edit the file \n/var/lib/misc/dnsmasq.leases\n, and delete the pre-existing\nlease for the old client machine that will no longer exist.\n\n\nRestart dnsmasq on the router,\n\nand then restart networking on the new client machine.",
"title": "How To Reassign A Static Ip Address with dnsmasq"
},
{
"location": "/how_to_reassign_a_static_ip_address_with_dnsmasq/#how-to-reassign-a-static-ip-address-with-dnsmasq",
"text": "On your router you can assign static ip addresses for various machines\nin your network, by writing the reservations in the file /etc/dnsmasq.conf . These will be in the form as below. dhcp-host=<mac address>,<ip address> So here's how you transfer an existing static ip address assignment to\na new client machine. Begin by editting the file /etc/dnsmasq.conf on\nyour router, and update the mac address associated with the intended\nip address. Next, temporarily stop dnsmasq. systemctl stop dnsmasq Next shutdown networking on the new client machine. Shutting the machine down might work,\nor the command dhclient -v -r might get the job done (you will lose the connection). Now on the router, edit the file /var/lib/misc/dnsmasq.leases , and delete the pre-existing\nlease for the old client machine that will no longer exist. Restart dnsmasq on the router, \nand then restart networking on the new client machine.",
"title": "How To Reassign a Static ip address with dnsmasq"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/",
"text": "Serve And Share Apps From Your Phone With Fdroid\n\n\nThis can speed up the process of updating apps on your devices, especially if fdroid is slow. \n\n\nStep 3: you are born on third base, find the menu item for \nSwap apps\n on phone one\n\n\nOpen fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select \nSwap apps\n.\n\n\n\n\nStep 4: enable the repo server on phone one\n\n\nOn the next screen toggle on \nVisible via Wi-Fi\n\n\n\n\nStep 5: a small step for your android\n\n\nAt the bottom of the screen select \nSCAN QR CODE\n\n\n\n\nStep 6: choose which apps to serve from phone one\n\n\nAt the next screen \nChoose Apps\n you want to xerve I mean serve and then touch the -> right arrow to proceed\n\n\n\n\nStep 7: another small step for your android\n\n\nTouch the -> right arrow again, do it.\n\n\n\n\nOcho: <- this means step eight\n\n\nTouch the -> right arrow until you are coming here\n\n\n\nNotice you can use either a qr code or a local url, so grab one of your other phones.\n\n\nPrivacy Friendly Qr Scanner\n appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.\n\n\nStep 9: find the menu item for \nRepositories\n on phone two\n\n\nOn your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose \nRepositories\n\n\n\n\nStep 10: (temporarily) toggle off the remote repos on phone two\n\n\nToggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner\n\n\n\n\nStep 11 A: key in the local repo url by hand on phone two\n\n\nAfter touching the + plus button in \nStep Ten\n on phone two, you can fill in the url address that corresponds to the photo in \nOcho\n\n\n\n\nStep 12 A: or scan in the local repo url with qr code on phone two\n\n\nIf you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo \nOcho\n. The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in \nStep Eleven A\n\n\nStep 13: profit from moar faster local downloads\n\n\nOn phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet.\n\n\n\n\nStep 14: how to remember all this?\n\n\nYou can bookmark.\n\n\nIn fact, you can add a shortcut icon directly to \n\nthis page\n,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof FireFox for Android.\n\n\nOr you can clone \nthe git repo\n\nwhich this site automatically builds itself from.",
"title": "Serve And Share Apps From Your Phone With Fdroid"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#serve-and-share-apps-from-your-phone-with-fdroid",
"text": "This can speed up the process of updating apps on your devices, especially if fdroid is slow.",
"title": "Serve And Share Apps From Your Phone With Fdroid"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-3-you-are-born-on-third-base-find-the-menu-item-for-swap-apps-on-phone-one",
"text": "",
"title": "Step 3: you are born on third base, find the menu item for Swap apps on phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#open-fdroid-and-navigate-to-the-menu-by-touching-three-dots-in-upper-right-hand-corner-of-the-screen-select-swap-apps",
"text": "",
"title": "Open fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select Swap apps."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-4-enable-the-repo-server-on-phone-one",
"text": "",
"title": "Step 4: enable the repo server on phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-the-next-screen-toggle-on-visible-via-wi-fi",
"text": "",
"title": "On the next screen toggle on Visible via Wi-Fi"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-5-a-small-step-for-your-android",
"text": "",
"title": "Step 5: a small step for your android"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-bottom-of-the-screen-select-scan-qr-code",
"text": "",
"title": "At the bottom of the screen select SCAN QR CODE"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-6-choose-which-apps-to-serve-from-phone-one",
"text": "",
"title": "Step 6: choose which apps to serve from phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-next-screen-choose-apps-you-want-to-xerve-i-mean-serve-and-then-touch-the-right-arrow-to-proceed",
"text": "",
"title": "At the next screen Choose Apps you want to xerve I mean serve and then touch the -&gt; right arrow to proceed"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-7-another-small-step-for-your-android",
"text": "",
"title": "Step 7: another small step for your android"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-again-do-it",
"text": "",
"title": "Touch the -&gt; right arrow again, do it."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#ocho-this-means-step-eight",
"text": "",
"title": "Ocho: &lt;- this means step eight"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-until-you-are-coming-here",
"text": "Notice you can use either a qr code or a local url, so grab one of your other phones. Privacy Friendly Qr Scanner appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.",
"title": "Touch the -&gt; right arrow until you are coming here"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-9-find-the-menu-item-for-repositories-on-phone-two",
"text": "",
"title": "Step 9: find the menu item for Repositories on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-your-other-phone-open-fdroid-navigate-to-menu-by-selecting-the-3-dots-in-the-upper-right-hand-corner-and-choose-repositories",
"text": "",
"title": "On your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose Repositories"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-10-temporarily-toggle-off-the-remote-repos-on-phone-two",
"text": "",
"title": "Step 10: (temporarily) toggle off the remote repos on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#toggle-all-the-current-repos-off-and-then-if-you-want-to-key-in-the-new-local-repo-url-by-hand-touch-the-plus-in-the-upper-right-hand-corner",
"text": "",
"title": "Toggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-11-a-key-in-the-local-repo-url-by-hand-on-phone-two",
"text": "",
"title": "Step 11 A: key in the local repo url by hand on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#after-touching-the-plus-button-in-step-ten-on-phone-two-you-can-fill-in-the-url-address-that-corresponds-to-the-photo-in-ocho",
"text": "",
"title": "After touching the + plus button in Step Ten on phone two, you can fill in the url address that corresponds to the photo in Ocho"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-12-a-or-scan-in-the-local-repo-url-with-qr-code-on-phone-two",
"text": "If you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo Ocho . The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in Step Eleven A",
"title": "Step 12 A: or scan in the local repo url with qr code on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-13-profit-from-moar-faster-local-downloads",
"text": "",
"title": "Step 13: profit from moar faster local downloads"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-phone-two-you-can-now-download-and-install-apps-and-updates-from-phone-one-and-the-download-speed-will-be-much-faster-than-from-the-internet",
"text": "",
"title": "On phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-14-how-to-remember-all-this",
"text": "",
"title": "Step 14: how to remember all this?"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#you-can-bookmark",
"text": "In fact, you can add a shortcut icon directly to this page ,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof FireFox for Android. \nOr you can clone the git repo \nwhich this site automatically builds itself from.",
"title": "You can bookmark."
},
{
"location": "/nspawn/",
"text": "Nspawn Containers\n\n\nThis Link For Arch Linux Wiki for Nspawn Containers\n\n\nI like the idea of starting with the easy containers first.\n\n\nCreate a FileSystem\n\n\ncd /var/lib/machines\n# create a directory\nmkdir <container>\n# use pacstrap to create a file system\npacstrap -i -c -d <container> base --ignore linux\n\n\n\n\nAt this point you might want to copy over some configs to save time later.\n\n\n\n\n/etc/locale.conf\n\n\n/root/.bashrc\n\n\n/etc/locale.gen\n\n\n\n\nFirst boot and create root password\n\n\nsystemd-nspawn -b -D <container>\npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone <timezone>\n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add *-n* flag\nsystemd-nspawn -b -D <container> -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D <container> -n --bind=/var/cache/pacman/pkg\n\n\n\n\nNetworking\n\n\nHere's a link that skips ahead to \nAutomatically Starting the Container\n\n\nOn Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work.\n\nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done.\n\n\n# /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge\n\n\n\n\n# /etc/systemd/network/50-br0.netdev\n[Match]\nName=br0\n\n[Network]\nAddress=10.0.55.1/24 # arbitrarily pick a subnet range to taste\nDHCPServer=yes\nIPMasquerade=yes\n\n\n\n\nNotice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the \nsystemd-nspawn\n\ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other.\n\n\n# first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D <container> --network-bridge=br0 --bind=/var/cache/pacman/pkg\n\n\n\n\nAutomatically Starting the Container\n\n\nHere's a link back up to \nNetworking\n\nin case you previously skipped ahead.\n\n\nThere are two ways to automate starting the container. You can override\n\nsystemd-nspawn@.service\n or create an \nnspawn\n file. 
\n\n\nFirst enable machines.target\n\n\n# to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@<container>.service\n\n\n\n\nEdit \n/etc/systemd/system/systemd-nspawn@<container>.service\n to add the \nsystemd-nspawn\n options\nyou want to the \nExecStart\n command.\n\n\nOr create \n/etc/systemd/nspawn/<container>.nspawn\n\n\n# /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0\n\n\n\n\n# /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nVirtualEthernet=1 # this seems to be the default sometimes, though\n\n\n\n\n# in either case\nsystemctl start/enable systemd-nspawn@<container>\n# to get a shell\nmachinectl shell <container>\n# and then to get an environment\nbash\n\n\n\n\nThis would be a good time to check for network and name resolution,\nsymlink resolv.conf if need be.\n\n\nInitial Configuration Inside The Container\n\n\n# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, networktime\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com\n\n\n\n\nIf you want to change the locale\n\n\nFinal Observations\n\n\n\n\nYou can start/stop nspawn containers with \nmachinectl\n command. \n\n\nYou can start nspawn containers with \nsystemd-nspawn\n command.\n\n\nYou can configure the systemd service for a container with @nspawn.service file override\n\n\nOr you can configure an nspawn container with a dot.nspawn file\n\n\n\n\nBut in regards to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts.\n\n\nAnother curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.",
"title": "Nspawn"
},
{
"location": "/nspawn/#nspawn-containers",
"text": "This Link For Arch Linux Wiki for Nspawn Containers I like the idea of starting with the easy containers first.",
"title": "Nspawn Containers"
},
{
"location": "/nspawn/#create-a-filesystem",
"text": "cd /var/lib/machines\n# create a directory\nmkdir <container>\n# use pacstrap to create a file system\npacstrap -i -c -d <container> base --ignore linux At this point you might want to copy over some configs to save time later. /etc/locale.conf /root/.bashrc /etc/locale.gen",
"title": "Create a FileSystem"
},
{
"location": "/nspawn/#first-boot-and-create-root-password",
"text": "systemd-nspawn -b -D <container>\npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone <timezone>\n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add *-n* flag\nsystemd-nspawn -b -D <container> -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D <container> -n --bind=/var/cache/pacman/pkg",
"title": "First boot and create root password"
},
{
"location": "/nspawn/#networking",
"text": "Here's a link that skips ahead to Automatically Starting the Container On Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work. \nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done. # /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge # /etc/systemd/network/50-br0.netdev\n[Match]\nName=br0\n\n[Network]\nAddress=10.0.55.1/24 # arbitrarily pick a subnet range to taste\nDHCPServer=yes\nIPMasquerade=yes Notice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the systemd-nspawn \ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other. # first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D <container> --network-bridge=br0 --bind=/var/cache/pacman/pkg",
"title": "Networking"
},
{
"location": "/nspawn/#automatically-starting-the-container",
"text": "Here's a link back up to Networking \nin case you previously skipped ahead. There are two ways to automate starting the container. You can override systemd-nspawn@.service or create an nspawn file. First enable machines.target # to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@<container>.service Edit /etc/systemd/system/systemd-nspawn@<container>.service to add the systemd-nspawn options\nyou want to the ExecStart command. Or create /etc/systemd/nspawn/<container>.nspawn # /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0 # /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nVirtualEthernet=1 # this seems to be the default sometimes, though # in either case\nsystemctl start/enable systemd-nspawn@<container>\n# to get a shell\nmachinectl shell <container>\n# and then to get an environment\nbash This would be a good time to check for network and name resolution,\nsymlink resolv.conf if need be.",
"title": "Automatically Starting the Container"
},
{
"location": "/nspawn/#initial-configuration-inside-the-container",
"text": "# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, networktime\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com If you want to change the locale",
"title": "Initial Configuration Inside The Container"
},
{
"location": "/nspawn/#final-observations",
"text": "You can start/stop nspawn containers with machinectl command. You can start nspawn containers with systemd-nspawn command. You can configure the systemd service for a container with @nspawn.service file override Or you can configure an nspawn container with a dot.nspawn file But in regards to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts. Another curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.",
"title": "Final Observations"
},
{
"location": "/gentoo_lxd_container/",
"text": "Gentoo LXD Container\n\n\nThere are Gentoo images at \nlinuxcontainers.org\n\n\nlxc image list images: | grep gentoo\nlxc init images:34760012759f\n# or\nlxc init images:34760012759f <pick a name>\n\n\n\n\nNetworking\n\n\nThe default image will request dhcp service on eth0. If you need a second static\nconnection on eth1, do the following. Describe eth1 in \n/etc/conf.d/net\n\n\n# /etc/conf.d/net\nconfig_eth1=\"10.44.84.101 netmask 255.255.255.0 brd 10.44.84.255\"\nroutes_eth1=\"default via 10.44.84.1\"\n\n\n\n\nThen in \n/etc/init.d/\n\n\nln -s /etc/init.d/net.lo /etc/init.d/net.eth1\n\n\n\n\nEnable net.eth1 in init.\n\n\nrc-update add net.eth1 default\n\n\n\n\nAnd then start networking on eth1\n\n\n/etc/init.d/net.eth1 start\n\n\n\n\nLocale and Timezone\n\n\nYou're supposed to write your timezone in \n/etc/timezone\n, \necho \"Europe/Brussels\" > /etc/timezone\n,\nand then run the command \nemerge --config sys-libs/timezone-data\n. But this doesn't work.\n\n\nYou can set the locale by uncommenting your locale in \n/etc/locale-gen\n, and then\nrunning the following commands.\n\n\nlocale-gen\neselect locale list\neselect locale set <number>\n. /etc/profile\n\n\n\n\nAnd the following corrected the timezone.\n\n\nunlink /etc/localtime\nln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime",
"title": "Gentoo LXD Container"
},
{
"location": "/gentoo_lxd_container/#gentoo-lxd-container",
"text": "",
"title": "Gentoo LXD Container"
},
{
"location": "/gentoo_lxd_container/#there-are-gentoo-images-at-linuxcontainersorg",
"text": "lxc image list images: | grep gentoo\nlxc init images:34760012759f\n# or\nlxc init images:34760012759f <pick a name>",
"title": "There are Gentoo images at linuxcontainers.org"
},
{
"location": "/gentoo_lxd_container/#networking",
"text": "The default image will request dhcp service on eth0. If you need a second static\nconnection on eth1, do the following. Describe eth1 in /etc/conf.d/net # /etc/conf.d/net\nconfig_eth1=\"10.44.84.101 netmask 255.255.255.0 brd 10.44.84.255\"\nroutes_eth1=\"default via 10.44.84.1\" Then in /etc/init.d/ ln -s /etc/init.d/net.lo /etc/init.d/net.eth1 Enable net.eth1 in init. rc-update add net.eth1 default And then start networking on eth1 /etc/init.d/net.eth1 start",
"title": "Networking"
},
{
"location": "/gentoo_lxd_container/#locale-and-timezone",
"text": "You're supposed to write your timezone in /etc/timezone , echo \"Europe/Brussels\" > /etc/timezone ,\nand then run the command emerge --config sys-libs/timezone-data . But this doesn't work. You can set the locale by uncommenting your locale in /etc/locale-gen , and then\nrunning the following commands. locale-gen\neselect locale list\neselect locale set <number>\n. /etc/profile And the following corrected the timezone. unlink /etc/localtime\nln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime",
"title": "Locale and Timezone"
},
{
"location": "/mastodon_on_arch/",
"text": "Some Observations About Installing Mastodon on Arch.\n\n\nNginx\n\n\nFrom the \nProduction Guide\n\nyou can copy the example nginx.conf file to \n/etc/nginx/sites-enabled/some_arbitrary.conf\n,\nand then add the following to \n/etc/nginx/nginx.conf\n in the http section,\nthis with a fresh install of nginx with the default configuration file.\n\n\n# /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}\n\n\n\n\nInstalling the Dependancies\n\n\npacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd\n\n\n\n\n# I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c\n\n\n\n\n\n\nI'm assuming base-devel is installed\n\n\npython2 seems to be required to run \nyarn install\n command later on\n\n\noidentd seems to be a usable replacement for pident\n\n\nlibpqxx pulls in postgresql-libs\n\n\nfile is already installed\n\n\ncurl is already installed\n\n\nruby-build and rbenv are installable from aur\n\n\nalso postgresql and redis unless, those are in another container or whatever.\n\n\n\n\nOther Observations\n\n\nI discovered that between \ngem install bundler\n and\n\n\nbundle install --deployment --without development test\n,\nyou have to update your environment, with \n\neval \"$(rbenv init -)\"\n, i.e.\n\n\necho 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. ~/.bashrc\n\n\n\n\nYou have to update your environment more than once, during the\ninstallation.\n\n\nPresumably you don't ever want to delete the \n~/live/Public/\n directory\nif that is where assets are being stored, but it seems ok to delete \n\n~/live/node_modules\n and then rerun the \nyarn install\n command.\n\n\nIn \n~/live/.env.production\n, \nSINGLE_USER_MODE=false\n has to be set\nto \nfalse\n until at least one user is created, or the web service won't \neven start. (Also \nchmod 755 ~/\n)\n\n\nThe Different Documentation for Updating\n\n\nUpdating Guide\n\nI really think that when you update, you're going to want to read through the installation guide,\nthen compare it to the older version, then read through the upgrade guide. And finally, I think\nyou want to really comb through the \nUpgrade notes\n in the\n\nRelease Notes\n\n\nInstallation Guide\n\n(bare metal)\n\nYou may also find this\n\nOlder Installation Guide\n\nuseful for reference.",
"title": "Mastodon on Arch"
},
{
"location": "/mastodon_on_arch/#some-observations-about-installing-mastodon-on-arch",
"text": "",
"title": "Some Observations About Installing Mastodon on Arch."
},
{
"location": "/mastodon_on_arch/#nginx",
"text": "From the Production Guide \nyou can copy the example nginx.conf file to /etc/nginx/sites-enabled/some_arbitrary.conf ,\nand then add the following to /etc/nginx/nginx.conf in the http section,\nthis with a fresh install of nginx with the default configuration file. # /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}",
"title": "Nginx"
},
{
"location": "/mastodon_on_arch/#installing-the-dependancies",
"text": "pacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd # I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c I'm assuming base-devel is installed python2 seems to be required to run yarn install command later on oidentd seems to be a usable replacement for pident libpqxx pulls in postgresql-libs file is already installed curl is already installed ruby-build and rbenv are installable from aur also postgresql and redis unless, those are in another container or whatever.",
"title": "Installing the Dependancies"
},
{
"location": "/mastodon_on_arch/#other-observations",
"text": "I discovered that between gem install bundler and bundle install --deployment --without development test ,\nyou have to update your environment, with eval \"$(rbenv init -)\" , i.e. echo 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. ~/.bashrc You have to update your environment more than once, during the\ninstallation. Presumably you don't ever want to delete the ~/live/Public/ directory\nif that is where assets are being stored, but it seems ok to delete ~/live/node_modules and then rerun the yarn install command. In ~/live/.env.production , SINGLE_USER_MODE=false has to be set\nto false until at least one user is created, or the web service won't \neven start. (Also chmod 755 ~/ )",
"title": "Other Observations"
},
{
"location": "/mastodon_on_arch/#the-different-documentation-for-updating",
"text": "Updating Guide \nI really think that when you update, you're going to want to read through the installation guide,\nthen compare it to the older version, then read through the upgrade guide. And finally, I think\nyou want to really comb through the Upgrade notes in the Release Notes Installation Guide \n(bare metal) \nYou may also find this Older Installation Guide \nuseful for reference.",
"title": "The Different Documentation for Updating"
},
{
"location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/",
"text": "Debian Nspawn Container On Arch For Testing Apache Configurations\n\n\nBegin by exporting the environmental variable for your squid cacheing \nproxy. If you're deboostrapping Debian file systems, the best way to\nspeed this up is with squid.\n\n\nThe ArchWiki page for nspawn containers has a\n\nDebian/Ubuntu subsection\n\nObviously you're going to want to install debootstrap and debian-archive-keyring.\n\n\n# to create a Stretch Container\ncd /var/lib/machines \nmkdir <container name> \ndeboostrap stretch <container name>\n\n\n\n\nAfter some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container.\n\n\ncp apt.conf /etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container name>/etc/hostname\n\n\n\n\nAnd then start the container, and set the root password.\n\n\n# boot in interactive mode\nsystemd-nspawn -D <container name>\n# set the passwd and logout\npassword \nlogout \n\n\n\n\nNow we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that the your bind mounts have the correct permissions \nfrom inside the container.\n\n\n# for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container name> --network-bridge=br0 --bind=/var/cache/apt/archives\n\n\n\n\nAlternately, if you use an nspawn file, then you can use a command \nsimilar to the following to start it, you'll first need to \nboot the container from the command line and install dbus,\nbecause \nmachinectl shell\n and \nmachinectl login\n won't work \nwithout dbus. In this case use the following sequence of commands.\n\n\n# start the container and login as root\nsystemd-nspawn -b -D <container name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff\n\n\n\n\nAfter this you can start the container with systemd, when \nusing an nspawn file.\n\n\nsystemctl start systemd-nspawn@<container name>\n\n\n\n\n# /etc/systemd/nspawn/<container name>.spawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nbridge=br0 \n\n\n\n\nYou can use tasksel to install a web-server.\n\n\n# apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2\n\n\n\n\nYou'll be up and running with the default self-signed certs.",
"title": "Debian Nspawn Container On Arch For Testing Apache Configurations"
},
{
"location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/#debian-nspawn-container-on-arch-for-testing-apache-configurations",
"text": "Begin by exporting the environmental variable for your squid cacheing \nproxy. If you're deboostrapping Debian file systems, the best way to\nspeed this up is with squid. The ArchWiki page for nspawn containers has a Debian/Ubuntu subsection \nObviously you're going to want to install debootstrap and debian-archive-keyring. # to create a Stretch Container\ncd /var/lib/machines \nmkdir <container name> \ndeboostrap stretch <container name> After some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container. cp apt.conf /etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container name>/etc/hostname And then start the container, and set the root password. # boot in interactive mode\nsystemd-nspawn -D <container name>\n# set the passwd and logout\npassword \nlogout Now we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that the your bind mounts have the correct permissions \nfrom inside the container. # for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container name> --network-bridge=br0 --bind=/var/cache/apt/archives Alternately, if you use an nspawn file, then you can use a command \nsimilar to the following to start it, you'll first need to \nboot the container from the command line and install dbus,\nbecause machinectl shell and machinectl login won't work \nwithout dbus. In this case use the following sequence of commands. # start the container and login as root\nsystemd-nspawn -b -D <container name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff After this you can start the container with systemd, when \nusing an nspawn file. systemctl start systemd-nspawn@<container name> # /etc/systemd/nspawn/<container name>.spawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nbridge=br0 You can use tasksel to install a web-server. # apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2 You'll be up and running with the default self-signed certs.",
"title": "Debian Nspawn Container On Arch For Testing Apache Configurations"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/",
"text": "Dynamic Cacheing Nginx Reverse Proxy For Pacman\n\n\nYou set up a dynamic cacheing reverse proxy and then you put the ip address or hostname for that server in \n/etc/pacman.d/mirrorlist\n on your client machines.\n\n\nOf course if you want to you can set this up and run it in an\n\nNspawn Container\n.\nThe \nArchWiki Page for pacman tips\n\nmostly spells out what to do, but I want to document\nthe exact steps I would take.\n\n\nAs for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with it's own ip address. But it only does one\nthing, and that's all you have to configure it for.\n\n\nI see no reason to use nginx-mainline instead of stable.\n\n\npacman -S nginx\n\n\n\n\nThe suggested configuration in the Arch Wiki\nis to create a directory \n/srv/http/pacman-cache\n,\nand that seems to work well enough\n\n\nmkdir /srv/http/pacman-cache\n# and then change it's ownershipt\nchown http:http /srv/http/pacman-cache\n\n\n\n\nnginx configuration\n\n\nand then it references an nginx.conf in\n\nthis gist\n,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx.\n\n\nYou can start with a default \n/etc/nginx/nginx.conf\n,\nand add the line \ninclude sites-enabled/*;\n\nat the end of the \nhttp\n section.\n\n\n# /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # 
ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n}\n\n\n\n\nAnd then create the directory \n/etc/nginx/sites-enabled\n\n\nmkdir /etc/nginx/sites-enabled\n\n\n\n\nAnd then create \n/etc/nginx/sites-enabled/proxy_cache.conf\n,\nwhich is \nmostly\n a\n\ncopy-and-paste from this gist\n.\n\n\nNotice the \nserver_name\n. This has to match the entry in\n\n/etc/pacman.d/mirrorlist\n on the client machines you are\nupdating from. If you can use the hostname, great. But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist.\n\n\nAnd of course your mirrorlist entry\non the client machine, has to preserve the directory scheme.\n\n\n# /etc/pacman.d/mirrorlist\nServer = http://<hostname or ip address>:<port if not 80>/archlinux/$repo/os/$arch\n\n\n\n\n# /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <hostname or ip address>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}\n\n\n\n\nsystemd service that cleans the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_clean.timer\n\n\n\n\nKeeps the 2 most recent versions of each package using paccache 
command.\n\n\n# /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that cleans the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target\n\n\n\n\nsystemd service that deletes the pacman database files from the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer\n\n\n\n\nYou won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, will keep an\neye on things.\n\n\n# /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $(find /srv -name *db) ; do rm $f; done\"\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that deletes the pacman database files from the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target\n\n\n\n\nIf you prefer cron because the server is actually an ubuntu:16.04 LXD container\n\n\nMake sure to use single quotes in the command here.\n\n\n#!/bin/bash\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin\n5,20,35,50 * * * * /bin/bash -c 'for f in $(find /var/www/html/pacman-cache -name *db) ; do rm $f; done'",
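To convince yourself the cache is actually working, something like the following; the hostname placeholder matches the mirrorlist entry above, and the repo path just follows that directory scheme.

# from a client whose mirrorlist points at the proxy
curl -sI http://<hostname or ip address>/archlinux/core/os/x86_64/core.db | head -n 1
# on the proxy, after a client runs pacman -Syu, packages accumulate under the root
ls /srv/http/pacman-cache/archlinux/core/os/x86_64/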
"title": "Dynamic Cacheing Nginx Reverse Proxy For Pacman"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dynamic-cacheing-nginx-reverse-proxy-for-pacman",
"text": "",
"title": "Dynamic Cacheing Nginx Reverse Proxy For Pacman"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#you-set-up-a-dynamic-cacheing-reverse-proxy-and-then-you-put-the-ip-address-or-hostname-for-that-server-in-etcpacmandmirrorlist-on-your-client-machines",
"text": "Of course if you want to you can set this up and run it in an Nspawn Container .\nThe ArchWiki Page for pacman tips \nmostly spells out what to do, but I want to document\nthe exact steps I would take. As for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with it's own ip address. But it only does one\nthing, and that's all you have to configure it for. I see no reason to use nginx-mainline instead of stable. pacman -S nginx The suggested configuration in the Arch Wiki\nis to create a directory /srv/http/pacman-cache ,\nand that seems to work well enough mkdir /srv/http/pacman-cache\n# and then change it's ownershipt\nchown http:http /srv/http/pacman-cache",
"title": "You set up a dynamic cacheing reverse proxy and then you put the ip address or hostname for that server in /etc/pacman.d/mirrorlist on your client machines."
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#nginx-configuration",
"text": "and then it references an nginx.conf in this gist ,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx. You can start with a default /etc/nginx/nginx.conf ,\nand add the line include sites-enabled/*; \nat the end of the http section. # /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n} And then create the directory /etc/nginx/sites-enabled mkdir /etc/nginx/sites-enabled And then create /etc/nginx/sites-enabled/proxy_cache.conf ,\nwhich is mostly a copy-and-paste from this gist . Notice the server_name . This has to match the entry in /etc/pacman.d/mirrorlist on the client machines you are\nupdating from. If you can use the hostname, great. But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist. And of course your mirrorlist entry\non the client machine, has to preserve the directory scheme. 
# /etc/pacman.d/mirrorlist\nServer = http://<hostname or ip address>:<port if not 80>/archlinux/$repo/os/$arch # /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <hostname or ip address>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}",
"title": "nginx configuration"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-cleans-the-proxy-cache",
"text": "",
"title": "systemd service that cleans the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer",
"text": "systemctl enable/start /etc/systemd/system/proxy_cache_clean.timer Keeps the 2 most recent versions of each package using paccache command. # /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog",
"title": "don't enable the service, enable the timer"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-cleans-the-proxy-cache",
"text": "# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for clean The pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target",
"title": "systemd timer for the systemd service that cleans the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache",
"text": "",
"title": "systemd service that deletes the pacman database files from the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer_1",
"text": "systemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer You won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, will keep an\neye on things. # /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $(find /srv -name *db) ; do rm $f; done\"\nStandardOutput=syslog\nStandardError=syslog",
"title": "don't enable the service, enable the timer"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache",
"text": "# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for clean The pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target",
"title": "systemd timer for the systemd service that deletes the pacman database files from the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#if-you-prefer-cron-because-the-server-is-actually-an-ubuntu1604-lxd-container",
"text": "Make sure single quote in the command here. #!/bin/bash\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin\n5,20,35,50 * * * * /bin/bash -c 'for f in $(find /var/www/html/pacman-cache -name *db) ; do rm $f; done'",
"title": "If you prefer cron because the server is actually an ubuntu:16.04 LXD container"
},
{
"location": "/freebsd_jails_on_freenas/",
"text": "FreeBSD Jails on FreeNAS\n\n\nMostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.\n\n\nIn The FreeNAS WebGui, Create A New Jail\n\n\nThe default networking configuration, will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to \nJails -> Storage -> Add Storage\n and\nadd the \npkg\n storage directory to \n/var/cache/pkg\n\ninside the jail. \n\n\nFor instance, on my local FreeNAS server,\nthe pkg directory is at /mnt/VolumeOne/pkg/.\n\n\nIf you ssh into the host server, you can type the command\n\njls\n, to list the jails. Based on the output of the\ncommand \njls\n, you can get a shell with \njexec <jail number>\n\nof \njexec <jail hostname>\n.\n\n\nupdating\n\n\nHow about the command \npkg audit -F\n? Downloads a\nlist of known security issues and checks your system\nagainst that.\n\n\nI would recommend, to myself anyway, to shell into\nthe new jail with \njexec\n, run \npkg upgrade\n to install any new packages,\nand then from the FreeNAS webgui, restart the jail. Although\nthe restarted jail will have a new jail number as reported by\nthe \njls\n command.\n\n\nlocale\n\n\nWhen you use \njexec\n to get a shell, you get an environment\nwith an utf_8 locale. Not so if you ssh into the new jail.\nFor this put the following contents into ~/.login_conf\n\n\n# ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:\n\n\n\n\nssh\n\n\nTo get ssh running, edit \n/etc/rc.conf\n inside the jail.\n\n\n# /etc/rc.conf\nsshd_enable=\"YES\"\n\n\n\n\nTo start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command.\n\n\nservice sshd start\n\n\n\n\nByobu\n\n\nYou'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend.\n\n\npkg install byobu tmux newt\n\n\n\n\nIf you execute \nbyobu-config\n, by pressing \nf9\n, the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working so you have to enable them\none at a time to see what happens.\n\n\n\n\ndate\n\n\ndisk\n\n\ndistro\n\n\nhostname\n\n\nip address\n\n\nload_average\n\n\nlogo\n\n\ntime\n\n\nuptime\n\n\nusers\n\n\nwhoami\n\n\n\n\nvim\n\n\nVia pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependancies, but vim-lite is not build with python.\n\n\nFor instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite.\n\n\nTo get lightline working update $TERM\n\n\n# ~/.config/fish/config.fish\nexport TERM=xterm-256color\n\n\n\n\nAnd vimrc\n\n\n# ~/.vimrc\nset ls=2\n\n\n\n\nAnother option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependancies with the following in /etc/make.conf.\n\n\n# /etc/make.conf\nWITHOUT_X11=yes\n\n\n\n\nAnd then when you compile vim from ports, run \nmake config\n where you can enable\npython.\n\n\npython\n\n\nFor python3 virtualenv\n\n\nvirtualenv-3.6 <directory>\n\n\n\n\nrunning gitit under the supervision of supervisord\n\n\npy27-supervisor and hs-gitit are available as pkg install, if you want to\nrun a gitit wiki.\n\n\ngitit doesn't come with an init service. 
To generate a sample config,\nrun \ngitit --print-default-config > gitit.conf\n, and then if you want\nyou can reference gitit.conf by passing gitit the \n-f\n flag.\n\n\nSo for instance, after you install supervisord, add something like the\nfollowing to the end of \n/usr/local/etc/supervisord.conf\n, and create\nthe directory \n/var/log/supervisor/\n.\n\n\n[program:gitit]\nuser=<user>\ndirectory=/path/to/wikidata/directory/\ncommand=/usr/local/bin/gitit -f /usr/local/etc/gitit.conf\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true\n\n\n\n\nsupervisord is a service you can enable in\n\n/etc/rc.conf\n\n\n# /etc/rc.conf\nsupervisord_enable=\"YES\"\n\n\n\n\nand then start with \nservice supervisord start\n.\nWhen you get supervisord running, you can start a\nsupervisorctl shell, i.e.\n\n\nsupervisorctl\nsupervisor> status\n# outputs\ngitit RUNNING pid 98057, uptime 0:32:27\nsupervisor> start/restart/stop gitit\nsupervisor> exit\n\n\n\n\nBut there is one other little detail, in that when you try to\nrun gitit as a daemon like this, on FreeBSD it will fail because it can't\nfind git. But the symlink solution is easy enough.\n\n\nln -s /usr/local/bin/git /usr/bin/\n\n\n\n\nAnd you might as well stick a reverse proxy in front of it. Assuming\nyou configure gitit to listen only on localhost:5001, install nginx.\n\npkg install nginx\n\n\nenable nginx in /etc/rc.conf\n\n\nnginx_enable=\"YES\"\n\n\n\n\nThen, in the file \n/usr/local/etc/nginx/nginx.conf\n change the location \"\n/\n\"\nso that it looks like this.\n\n\n{\n.....\n location / {\n # root /usr/local/www/nginx;\n # index index.html index.htm;\n proxy_pass http://127.0.0.1:5001;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n....\n}\n\n\n\n\nand then start nginx \nservice nginx start",
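As a small usage sketch for the python section above (the paths are arbitrary):

virtualenv-3.6 ~/venv/demo
# from a bourne-style shell; csh users want bin/activate.csh instead
. ~/venv/demo/bin/activate
pip install requests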
"title": "FreeBSD Jails on FreeNAS"
},
{
"location": "/freebsd_jails_on_freenas/#freebsd-jails-on-freenas",
"text": "Mostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.",
"title": "FreeBSD Jails on FreeNAS"
},
{
"location": "/freebsd_jails_on_freenas/#in-the-freenas-webgui-create-a-new-jail",
"text": "The default networking configuration, will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to Jails -> Storage -> Add Storage and\nadd the pkg storage directory to /var/cache/pkg \ninside the jail. For instance, on my local FreeNAS server,\nthe pkg directory is at /mnt/VolumeOne/pkg/. If you ssh into the host server, you can type the command jls , to list the jails. Based on the output of the\ncommand jls , you can get a shell with jexec <jail number> \nof jexec <jail hostname> .",
"title": "In The FreeNAS WebGui, Create A New Jail"
},
{
"location": "/freebsd_jails_on_freenas/#updating",
"text": "How about the command pkg audit -F ? Downloads a\nlist of known security issues and checks your system\nagainst that. I would recommend, to myself anyway, to shell into\nthe new jail with jexec , run pkg upgrade to install any new packages,\nand then from the FreeNAS webgui, restart the jail. Although\nthe restarted jail will have a new jail number as reported by\nthe jls command.",
"title": "updating"
},
{
"location": "/freebsd_jails_on_freenas/#locale",
"text": "When you use jexec to get a shell, you get an environment\nwith an utf_8 locale. Not so if you ssh into the new jail.\nFor this put the following contents into ~/.login_conf # ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:",
"title": "locale"
},
{
"location": "/freebsd_jails_on_freenas/#ssh",
"text": "To get ssh running, edit /etc/rc.conf inside the jail. # /etc/rc.conf\nsshd_enable=\"YES\" To start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command. service sshd start",
"title": "ssh"
},
{
"location": "/freebsd_jails_on_freenas/#byobu",
"text": "You'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend. pkg install byobu tmux newt If you execute byobu-config , by pressing f9 , the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working so you have to enable them\none at a time to see what happens. date disk distro hostname ip address load_average logo time uptime users whoami",
"title": "Byobu"
},
{
"location": "/freebsd_jails_on_freenas/#vim",
"text": "Via pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependancies, but vim-lite is not build with python. For instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite. To get lightline working update $TERM # ~/.config/fish/config.fish\nexport TERM=xterm-256color And vimrc # ~/.vimrc\nset ls=2 Another option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependancies with the following in /etc/make.conf. # /etc/make.conf\nWITHOUT_X11=yes And then when you compile vim from ports, run make config where you can enable\npython.",
"title": "vim"
},
{
"location": "/freebsd_jails_on_freenas/#python",
"text": "For python3 virtualenv virtualenv-3.6 <directory>",
"title": "python"
},
{
"location": "/freebsd_jails_on_freenas/#running-gitit-under-the-supervision-of-supervisord",
"text": "py27-supervisor and hs-gitit are available as pkg install, if you want to\nrun a gitit wiki. gitit doesn't come with an init service. To generate a sample config,\nrun gitit --print-default-config > gitit.conf , and then if you want\nyou can reference gitit.conf by passing gitit the -f flag. So for instance, after you install supervisord, add something like the\nfollowing to the end of /usr/local/etc/supervisord.conf , and create\nthe directory /var/log/supervisor/ . [program:gitit]\nuser=<user>\ndirectory=/path/to/wikidata/directory/\ncommand=/usr/local/bin/gitit -f /usr/local/etc/gitit.conf\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true supervisord is a service you can enable in /etc/rc.conf # /etc/rc.conf\nsupervisord_enable=\"YES\" and then start with service supervisord start \nwhen you get supervisord running, you can start a\nsupervisorctl shell, i.e. supervisorctl\nsupervisor> status\n# outputs\ngitit RUNNING pid 98057, uptime 0:32:27\nsupervisor> start/restart/stop gitit\nsupervisor> exit But there is one other little detail, in that when you try to\nrun gitit as a daemon like this, on FreeBSD it will fail because it can't\nfind git. But the symlink solution is easy enough. ln -s /usr/local/bin/git /usr/bin/ And you might as well stick a reverse proxy in front of it. Assuming\nyou configure gitit listen only on localhost:5001, install nginx. pkg install nginx enable nginx in /etc/rc.conf nginx_enable=\"YES\" Then, in the file /usr/local/etc/nginx/nginx.conf change the location \" / \"\nso that it looks like this. {\n.....\n location / {\n # root /usr/local/www/nginx;\n # index index.html index.htm;\n proxy_pass http://127.0.0.1:5001;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n....\n} and then start nginx service nginx start",
"title": "running gitit under the supervision of supervisord"
},
{
"location": "/arch_redis_nspawn/",
"text": "Quick Dirty Redis Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container,\ninstall redis, and start/enable redis.service.\nOnce you have the container running, it seems all you have to do to get\nthings working in a container subnet is to change the bind address.\n\n\n# /etc/redis.conf\n# bind 127.0.0.1\nbind 0.0.0.0\n\n\n\n\nyou can nmap port 6379, be sure to restart redis\n\n\nAgain I would refer you to the Arch Wiki",
"title": "Quick Dirty Redis Nspawn Container on Arch Linux"
},
{
"location": "/arch_redis_nspawn/#quick-dirty-redis-nspawn-container-on-arch-linux",
"text": "Refer to the Nspawn page for setting up the nspawn container,\ninstall redis, and start/enable redis.service.\nOnce you have the container running, it seems all you have to do to get\nthings working in a container subnet is to change the bind address. # /etc/redis.conf\n# bind 127.0.0.1\nbind 0.0.0.0 you can nmap port 6379, be sure to restart redis Again I would refer you to the Arch Wiki",
"title": "Quick Dirty Redis Nspawn Container on Arch Linux"
},
{
"location": "/arch_postgresql_nspawn/",
"text": "Quick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container.\n\nAnd then refer the \nArchWiki instructions\n\nfor postgresql. \n\n\nYou'll want to install postgresql, set a password for the default user \npostgres\n,\nand then login as postgres and initilize the database. \n\n\npacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the databse cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'\n\n\n\n\nYou'll need to configure \n/var/lib/postgres/data/pg_hba.conf\n and\n\n/var/lib/postgres/data/postgresql.conf\n for remote access,\npresumably with an identd daemon in mind. The ident daemon will\nlisten on port 113, not on the machine with the database server,\nbut it listens from the machine where is the client that remotely\nwants to access the database.",
"title": "Quick Dirty Postgresql Nspawn Container on Arch Linux"
},
{
"location": "/arch_postgresql_nspawn/#quick-dirty-postgresql-nspawn-container-on-arch-linux",
"text": "Refer to the Nspawn page for setting up the nspawn container. \nAnd then refer the ArchWiki instructions \nfor postgresql. You'll want to install postgresql, set a password for the default user postgres ,\nand then login as postgres and initilize the database. pacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the databse cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data' You'll need to configure /var/lib/postgres/data/pg_hba.conf and /var/lib/postgres/data/postgresql.conf for remote access,\npresumably with an identd daemon in mind. The ident daemon will\nlisten on port 113, not on the machine with the database server,\nbut it listens from the machine where is the client that remotely\nwants to access the database.",
"title": "Quick Dirty Postgresql Nspawn Container on Arch Linux"
},
{
"location": "/misc_tips_troubleshooting/",
"text": "Misc Tips, TroubleShooting\n\n\nSending commands to LXD containers\n\n\nUse \nbash -c \"<command>\"\n for commands with wildcards. i.e.\n\n\nfor machine in $(lxc list | grep RUNNING | awk '{print $2}') ;\\\n do lxc exec \"${machine}\" -- bash -c \"cat /etc/apt/apt.conf.d/02*\" ; done\n\n\n\n\nfish shell is actually a little bit cleaner\n\n\nfor machine in (lxc list | grep RUNNING | awk '{print $2}') ; \\\n lxc exec $machine -- bash -c \"cat /etc/apt/apt.conf.d/02*\" ; end\n\n\n\n\n# change all their time zones\nfor machine in (lxc list | grep RUNNING | awk '{print $2}') ; \\\n lxc exec $machine -- bash -c \"timedatectl set-timezone America/Los_Angeles\" ; end\n# check to see if anyone is logged in before rebooting\nfor machine in (lxc list | grep RUNNING | awk '{print $2}') ; echo ; \\\n echo $machine ; lxc exec $machine -- bash -c \"who\" ; end \n\n\n\n\nMove LXD container to another Server\n\n\n# stop the container\nlxc stop <container name>\n# publish image of container to local *storage*\nlxc publish <container name> --alias <image name>\n# export the new image to tarball\nlxc image export <image name>\n# scp tarball to other box\nscp a4762b114fecee2e2bc227b9032405642c5286c02009babef7953e011e597bfe.tar.gz server:\n# on other box import the image\nlxc image import <tarball file name> --alias <image name>\n# launch container\nlxc launch <image name> <container name>\n# assign profile to container\nlxc profile assign <container name> <profile name>\n\n\n\n\nShell into the new running container, update any network interface\nconfigurations that you need to, and then restart the container.\n\n\nSee also\n\nLXD Container Home Server Networking For Dummies\n\n\nUbuntu-Mate-Welcome-Center doesn't work for some repos\n\n\nPerhaps your apt-cacher-ng proxy server isn't configured to allow \ntraffic through from https sources. Make sure the following is\nuncommented. This applies for all PPA's that use https.\n\n\n# /etc/apt-cacher-ng/acng.conf\nPassThroughPattern: .*\n\n\n\n\nQuitting Mosh\n\n\nThe key combination to quit mosh it \nctrl+6+.\n\nAlso, WTF?\n\n\nUpdating Caddy Server\n\n\nYou update Caddy Server with a new Go Binary, try to restart caddy.service, and it fails.\nMaybe you get an error message such as the following \nlisten tcp :80: bind: permission denied\n and/or\n\nlisten tcp :443: bind: permission denied\n.\n\nFix this error with the following command \nsudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/caddy\n\n\nZFS Disc Error Disc Identification\n\n\nYou created a zpool using /dev/disk-by-id to specify the devices, and now you want to figure out\nwhich disks are causing you trouble. For instance, your system log, \njournalctl | grep -i fail\n\nshows read error on /dev/sdc. \n\n\nYou can use \nlsblk -o MODEL,SERIAL\n to match the information generated by \nzpool status\n.\n\n\nByobu/Tmux copy mode\n\n\n\n\n\n\nEnter Copy Mode \n\n\n\n\nin byobu use key \nF7\n\n\nin tmux use \n<prefix> [\n\n\n\n\n\n\n\n\nNavigate around less/copy buffer using \nh,j,k,l\n\n\n\n\n\n\nSelect text\n\n\n\n\n<space>\n begins text selection\n\n\nmove cursor around using \nh,j,k,l\n\n\n<enter>\n ends text selection\n\n\n\n\n\n\n\n\nPaste selection in any tmux/byobu window\n\n\n\n\nin byobu use \nalt+insert\n\n\nin tmux use \n<prefix> ]",
"title": "Misc Tips, Trouble Shooting"
},
{
"location": "/misc_tips_troubleshooting/#misc-tips-troubleshooting",
"text": "",
"title": "Misc Tips, TroubleShooting"
},
{
"location": "/misc_tips_troubleshooting/#sending-commands-to-lxd-containers",
"text": "Use bash -c \"<command>\" for commands with wildcards. i.e. for machine in $(lxc list | grep RUNNING | awk '{print $2}') ;\\\n do lxc exec \"${machine}\" -- bash -c \"cat /etc/apt/apt.conf.d/02*\" ; done fish shell is actually a little bit cleaner for machine in (lxc list | grep RUNNING | awk '{print $2}') ; \\\n lxc exec $machine -- bash -c \"cat /etc/apt/apt.conf.d/02*\" ; end # change all their time zones\nfor machine in (lxc list | grep RUNNING | awk '{print $2}') ; \\\n lxc exec $machine -- bash -c \"timedatectl set-timezone America/Los_Angeles\" ; end\n# check to see if anyone is logged in before rebooting\nfor machine in (lxc list | grep RUNNING | awk '{print $2}') ; echo ; \\\n echo $machine ; lxc exec $machine -- bash -c \"who\" ; end",
"title": "Sending commands to LXD containers"
},
{
"location": "/misc_tips_troubleshooting/#move-lxd-container-to-another-server",
"text": "# stop the container\nlxc stop <container name>\n# publish image of container to local *storage*\nlxc publish <container name> --alias <image name>\n# export the new image to tarball\nlxc image export <image name>\n# scp tarball to other box\nscp a4762b114fecee2e2bc227b9032405642c5286c02009babef7953e011e597bfe.tar.gz server:\n# on other box import the image\nlxc image import <tarball file name> --alias <image name>\n# launch container\nlxc launch <image name> <container name>\n# assign profile to container\nlxc profile assign <container name> <profile name> Shell into the new running container, update any network interface\nconfigurations that you need to, and then restart the container. See also LXD Container Home Server Networking For Dummies",
"title": "Move LXD container to another Server"
},
{
"location": "/misc_tips_troubleshooting/#ubuntu-mate-welcome-center-doesnt-work-for-some-repos",
"text": "Perhaps your apt-cacher-ng proxy server isn't configured to allow \ntraffic through from https sources. Make sure the following is\nuncommented. This applies for all PPA's that use https. # /etc/apt-cacher-ng/acng.conf\nPassThroughPattern: .*",
"title": "Ubuntu-Mate-Welcome-Center doesn't work for some repos"
},
{
"location": "/misc_tips_troubleshooting/#quitting-mosh",
"text": "The key combination to quit mosh it ctrl+6+. \nAlso, WTF?",
"title": "Quitting Mosh"
},
{
"location": "/misc_tips_troubleshooting/#updating-caddy-server",
"text": "You update Caddy Server with a new Go Binary, try to restart caddy.service, and it fails.\nMaybe you get an error message such as the following listen tcp :80: bind: permission denied and/or listen tcp :443: bind: permission denied . \nFix this error with the following command sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/caddy",
"title": "Updating Caddy Server"
},
{
"location": "/misc_tips_troubleshooting/#zfs-disc-error-disc-identification",
"text": "You created a zpool using /dev/disk-by-id to specify the devices, and now you want to figure out\nwhich disks are causing you trouble. For instance, your system log, journalctl | grep -i fail \nshows read error on /dev/sdc. You can use lsblk -o MODEL,SERIAL to match the information generated by zpool status .",
"title": "ZFS Disc Error Disc Identification"
},
{
"location": "/misc_tips_troubleshooting/#byobutmux-copy-mode",
"text": "Enter Copy Mode in byobu use key F7 in tmux use <prefix> [ Navigate around less/copy buffer using h,j,k,l Select text <space> begins text selection move cursor around using h,j,k,l <enter> ends text selection Paste selection in any tmux/byobu window in byobu use alt+insert in tmux use <prefix> ]",
"title": "Byobu/Tmux copy mode"
},
{
"location": "/self_signed_certs/",
"text": "Setting up Self-Signed Certs\n\n\nThis \njamielinux\n\nblog post looks promising.",
"title": "Self Signed Certs"
},
{
"location": "/self_signed_certs/#setting-up-self-signed-certs",
"text": "This jamielinux \nblog post looks promising.",
"title": "Setting up Self-Signed Certs"
},
{
"location": "/selfoss_on_centos7/",
"text": "Selfoss on Centos 7\n\n\nThe target here is a very low resource vps running Centos7.\nYou can use mysql or postgresql, but performance is fine with sqlite database.\nYou'll need the epel repo in order to install python2-certbot-apache.\n\n\nHere's a great guide for setting up apache with letsencrypt on Centos7\n.\n\n\nYou'll want to install the following packages\n\n\n\n\nmod_ssl\n\n\npython2-certbot-apache\n\n\nphp\n\n\nphp-gd\n\n\nphp-http\n\n\nphp-pdo\n\n\nunzip\n\n\nwget\n\n\n\n\nThe documentation\n explains how to set up\nthe config.ini and .htaccess files, RewriteEngine, RewriteBase,\ndatabase, and explains the apache modules that you\nwant enabled. Hint, use \napachectl -M\n, \napachectl help\n, etc.\n\n\nYou'll probably want to extract the \napplication to \n/var/www/html/selfoss/\n or similar, and then add a configuration.\n\n\n# /etc/httpd/conf.d/selfoss.conf\nAlias \"/selfoss/\" \"/var/www/html/selfoss/\"\n<Directory \"/var/www/html/selfoss\">\n Options FollowSymLinks\n AllowOverride All\n</Directory>\n\n\n\n\nMake sure that the selfoss directory is owned by apache:apache.",
"title": "Selfoss on Centos7"
},
{
"location": "/selfoss_on_centos7/#selfoss-on-centos-7",
"text": "The target here is a very low resource vps running Centos7.\nYou can use mysql or postgresql, but performance is fine with sqlite database.\nYou'll need the epel repo in order to install python2-certbot-apache. Here's a great guide for setting up apache with letsencrypt on Centos7 . You'll want to install the following packages mod_ssl python2-certbot-apache php php-gd php-http php-pdo unzip wget The documentation explains how to set up\nthe config.ini and .htaccess files, RewriteEngine, RewriteBase,\ndatabase, and explains the apache modules that you\nwant enabled. Hint, use apachectl -M , apachectl help , etc. You'll probably want to extract the \napplication to /var/www/html/selfoss/ or similar, and then add a configuration. # /etc/httpd/conf.d/selfoss.conf\nAlias \"/selfoss/\" \"/var/www/html/selfoss/\"\n<Directory \"/var/www/html/selfoss\">\n Options FollowSymLinks\n AllowOverride All\n</Directory> Make sure that the selfoss directory is owned by apache:apache.",
"title": "Selfoss on Centos 7"
},
{
"location": "/stupid_package_manager_tricks/",
"text": "Stupid Package Manager Tricks\n\n\napt, apt-get ,aptitude, dpkg\n\n\nWait what was that list of suggested packages?\n\n\napt-cache depends <package>\n\nor\n\napt-cache depends <package> | grep -i Suggests\n\n\nWhat versions of a package are available, (based on currently configured repositories)?\n\n\napt-cache madison <package>",
"title": "Stupid Package Manager Tricks"
},
{
"location": "/stupid_package_manager_tricks/#stupid-package-manager-tricks",
"text": "",
"title": "Stupid Package Manager Tricks"
},
{
"location": "/stupid_package_manager_tricks/#apt-apt-get-aptitude-dpkg",
"text": "Wait what was that list of suggested packages? apt-cache depends <package> \nor apt-cache depends <package> | grep -i Suggests What versions of a package are available, (based on currently configured repositories)? apt-cache madison <package>",
"title": "apt, apt-get ,aptitude, dpkg"
},
{
"location": "/stupid_kvm_tricks/",
"text": "Stupid KVM Tricks\n\n\nvirt-install ubuntu16.04\n\n\nCreate the disk image\n\n\nqemu-img create -f qcow2 /var/lib/libvirt/images/xenial.qcow2 20G\n\n\nCommand to run the install\n\n\nvirt-install \\\n --name xenial \\\n --ram 4096 \\\n --disk path=/var/lib/libvirt/images/xenial.qcow2,size=20 \\\n --vcpus 4 \\\n --os-type linux \\\n --os-variant ubuntu16.04 \\\n --network bridge=br0 \\\n --graphics none \\\n --console pty,target_type=serial \\\n --location ./ubuntu-16.04.3-server-amd64.iso \\\n --extra-args 'console=ttyS0,115200n8 serial'\n\n\n\n\nvirt-install Arch Linux\n\n\nThe \n--extra-args\n option lets you use a serial console. But the\n\n--extra-args\n option only works if you also use an \n--location\n\noption. But the \n--location\n option can only be used with certain isos.\nSo use \n--cdrom\n instead of \n--location\n, drop the \n--extra-args\n,\nand instruct the kernel to boot with a serial console with a parameter\nat the boot splash screen.\n\n\nqemu-img create -f qcow2 /var/lib/libvirt/images/arch.qcow2 20G\n\nvirt-install --name arch --ram 4096 \\\n --disk path=/var/lib/libvirt/images/arch.qcow2,size=20 \\\n --vcpus 2 \\\n --os-type linux \\\n --os-variant ubuntu16.04 \\ \n --network bridge=virbr0 \\\n --graphics none \\ \n --console pty,target_type=serial \\\n --cdrom /var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso\n\n\n\n\nthe arch boot splash screen will appear in your terminal and you can \ntap the \"tab\" key to edit boot parameters\n\n\nadd \"console=ttyS0\" to kernel command line parameters\n\n\nbefore\n\n\n> .linux boot/x86_64/vmlinuz archisobasedir=arch archisolabel=ARCH_201802 initrd=boot/intel_ucode.img,boot/x86_64/archiso.img\n\n\n\n\nafter\n\n\n> .linux boot/x86_64/vmlinuz archisobasedir=arch archisolabel=ARCH_201802 initrd=boot/intel_ucode.img,boot/x86_64/archiso.img console=ttyS0\n\n\n\n\n\narch boots ...\n...\n...\n...\n\nroot@archiso ~ # lsblk\nNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nloop0 7:0 0 432M 1 loop /run/archiso/sfs/airootfs\nsr0 11:0 1 539M 0 rom /run/archiso/bootmnt\nvda 254:0 0 20G 0 disk \nroot@archiso ~ # \n\n\n\n\nChange the Network Interface\n\n\nbr0 gets addresses from the network router, but what if you want\nyour vm to have be on the virbr0 192.168.122.0/24 subnet?\n\n\nvirsh edit xenial\n\n\nAnd then 'J' all the way down to the bottom, change the interface name from br0 to\nvirbr0, \n\n\nvirsh start xenial\n\n\nand then look for the machine with nmap\n\n\nnmap -sn 192.168.122.0/24\n\n\nClone the VM\n\n\nIn this case we don't have to pre-allocate the disk image because virt-clone will do that\nfor us.\n\n\nvirt-clone --original xenial --name xenial-clone \\\n --file /var/lib/libvirt/images/xenial-clone.qcow2\n\n\n\n\nClone the VM to another Machine\n\n\nFirst dump the xml that defines the virtual machine.\n\n\nvirsh dumpxml xenial > xenial.xml\n\n\n\n\nThen copy both \nxenial.xml\n and \nxenial.qcow2\n to the new host machine. On the new kvm\nhost you'll want to at least make sure your vm has the correct CPU architecture.\nThe command to get a list of supported kvm cpu architectures is:\n\n\nvirsh cpu-models <arch>\n# i.e.\nvirsh cpu-models x86_64\n\n\n\n\nAfter you edit \nxenial.xml\n and update the correct cpu architecture, mv \nxenial.qcow2\n\nto \n/var/lib/libvirt/images/\n, clone it. \nvirt-clone\n will handle generating new\nmac addresses for the network interfaces.\n\n\n <cpu mode='custom' match='exact'>\n <model fallback='allow'>Haswell-noTSX</model>\n </cpu>\n# i.e. 
change to above to\n <cpu mode='custom' match='exact'>\n <model fallback='allow'>SandyBridge</model>\n </cpu>\n\n\n\n\n\nvirt-clone --original-xml xenial.xml --name xenial-clone \\\n --file /var/lib/libvirt/images/xenial-clone.qcow2\n\n\n\n\nWhat is the os-type and os-variant type names?\n\n\nosinfo-query os\n\n\nmisc\n\n\n\n\nStart the vm \nvirsh start xenial\n \n\n\nList all the vms \nvirsh list --all\n \n\n\nStop the vm \nvirsh destroy xenial\n \n\n\nDelete the vm \nvirsh undefine xenial\n \n\n\n\n\nvirsh help\n\n\nThe \nvirsh help\n command returns a long chart of help information. But each section has\na keyword.\n\n\nTake for instance the command \nvirsh help monitor\n. From this we\nsee the \ndomiflist\n subcommand (among others). Unfortunately \ndomifaddr\n doesn't seem to\nwork on the Ubuntu:16.04 host, but there are other ways to find the ip address of\na virtual machine.\n\n\nSo now if you want to see what host interface the vm \nxenial\n is attached to,\ntype. \n\n\nvirsh domiflist xenial\n\n\n\n\nwhich returns:\n\n\nInterface Type Source Model MAC\n-------------------------------------------------------\nvnet1 bridge virbr0 virtio 52:54:00:58:bf:75\n\n\n\n\nSo now we can find the address of virbr0 on the host machine.\n\n\nifconfig virbr0\n\n\n\n\nwhich returns:\n\n\nvirbr0 Link encap:Ethernet HWaddr 52:54:00:38:87:38 \n inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0\n UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1\n RX packets:1351 errors:0 dropped:0 overruns:0 frame:0\n TX packets:3037 errors:0 dropped:0 overruns:0 carrier:0\n collisions:0 txqueuelen:1000 \n RX bytes:232346 (232.3 KB) TX bytes:502916 (502.9 KB)\n\n\n\n\nand thus we know what subnet to scan with nmap to find the ip address of the vm\n\n\nnmap -sn 192.168.122.0/24\n\n\n\n\nSnapshots\n\n\nCreate snapshot of vm \ndcing\n\n\nvirsh snapshot-create-as --domain dcing --name dcing-snap0\n\n\n\n\nBut you don't need to name your snapshots because they are listed by time.\n\n\nvirsh snapshot-create --domain dcing\n\n\n\n\nList snapshots for vm \ndcing\n\n\nvirsh snapshot-list --domain dcing\n\n Name Creation Time State\n------------------------------------------------------------\n 1518366561 2018-02-11 08:29:21 -0800 shutoff\n dcing-snap0 2018-02-11 08:22:57 -0800 shutoff\n\n\n\n\nRevert dcing to snap0\n\n\nvirsh snapshot-revert --domain dcing --snapshotname dcing-snap0\n\n\n\n\nDelete snapshot\n\n\nvirsh snapshot-delete --domain dcing --snapshotname dcing-snap0",
"title": "Stupid KVM Tricks"
},
{
"location": "/stupid_kvm_tricks/#stupid-kvm-tricks",
"text": "",
"title": "Stupid KVM Tricks"
},
{
"location": "/stupid_kvm_tricks/#virt-install-ubuntu1604",
"text": "Create the disk image qemu-img create -f qcow2 /var/lib/libvirt/images/xenial.qcow2 20G Command to run the install virt-install \\\n --name xenial \\\n --ram 4096 \\\n --disk path=/var/lib/libvirt/images/xenial.qcow2,size=20 \\\n --vcpus 4 \\\n --os-type linux \\\n --os-variant ubuntu16.04 \\\n --network bridge=br0 \\\n --graphics none \\\n --console pty,target_type=serial \\\n --location ./ubuntu-16.04.3-server-amd64.iso \\\n --extra-args 'console=ttyS0,115200n8 serial'",
"title": "virt-install ubuntu16.04"
},
{
"location": "/stupid_kvm_tricks/#virt-install-arch-linux",
"text": "The --extra-args option lets you use a serial console. But the --extra-args option only works if you also use an --location \noption. But the --location option can only be used with certain isos.\nSo use --cdrom instead of --location , drop the --extra-args ,\nand instruct the kernel to boot with a serial console with a parameter\nat the boot splash screen. qemu-img create -f qcow2 /var/lib/libvirt/images/arch.qcow2 20G\n\nvirt-install --name arch --ram 4096 \\\n --disk path=/var/lib/libvirt/images/arch.qcow2,size=20 \\\n --vcpus 2 \\\n --os-type linux \\\n --os-variant ubuntu16.04 \\ \n --network bridge=virbr0 \\\n --graphics none \\ \n --console pty,target_type=serial \\\n --cdrom /var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso the arch boot splash screen will appear in your terminal and you can \ntap the \"tab\" key to edit boot parameters add \"console=ttyS0\" to kernel command line parameters before > .linux boot/x86_64/vmlinuz archisobasedir=arch archisolabel=ARCH_201802 initrd=boot/intel_ucode.img,boot/x86_64/archiso.img after > .linux boot/x86_64/vmlinuz archisobasedir=arch archisolabel=ARCH_201802 initrd=boot/intel_ucode.img,boot/x86_64/archiso.img console=ttyS0 \narch boots ...\n...\n...\n...\n\nroot@archiso ~ # lsblk\nNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nloop0 7:0 0 432M 1 loop /run/archiso/sfs/airootfs\nsr0 11:0 1 539M 0 rom /run/archiso/bootmnt\nvda 254:0 0 20G 0 disk \nroot@archiso ~ #",
"title": "virt-install Arch Linux"
},
{
"location": "/stupid_kvm_tricks/#change-the-network-interface",
"text": "br0 gets addresses from the network router, but what if you want\nyour vm to have be on the virbr0 192.168.122.0/24 subnet? virsh edit xenial And then 'J' all the way down to the bottom, change the interface name from br0 to\nvirbr0, virsh start xenial and then look for the machine with nmap nmap -sn 192.168.122.0/24",
"title": "Change the Network Interface"
},
{
"location": "/stupid_kvm_tricks/#clone-the-vm",
"text": "In this case we don't have to pre-allocate the disk image because virt-clone will do that\nfor us. virt-clone --original xenial --name xenial-clone \\\n --file /var/lib/libvirt/images/xenial-clone.qcow2",
"title": "Clone the VM"
},
{
"location": "/stupid_kvm_tricks/#clone-the-vm-to-another-machine",
"text": "First dump the xml that defines the virtual machine. virsh dumpxml xenial > xenial.xml Then copy both xenial.xml and xenial.qcow2 to the new host machine. On the new kvm\nhost you'll want to at least make sure your vm has the correct CPU architecture.\nThe command to get a list of supported kvm cpu architectures is: virsh cpu-models <arch>\n# i.e.\nvirsh cpu-models x86_64 After you edit xenial.xml and update the correct cpu architecture, mv xenial.qcow2 \nto /var/lib/libvirt/images/ , clone it. virt-clone will handle generating new\nmac addresses for the network interfaces. <cpu mode='custom' match='exact'>\n <model fallback='allow'>Haswell-noTSX</model>\n </cpu>\n# i.e. change to above to\n <cpu mode='custom' match='exact'>\n <model fallback='allow'>SandyBridge</model>\n </cpu> virt-clone --original-xml xenial.xml --name xenial-clone \\\n --file /var/lib/libvirt/images/xenial-clone.qcow2",
"title": "Clone the VM to another Machine"
},
{
"location": "/stupid_kvm_tricks/#what-is-the-os-type-and-os-variant-type-names",
"text": "osinfo-query os",
"title": "What is the os-type and os-variant type names?"
},
{
"location": "/stupid_kvm_tricks/#misc",
"text": "Start the vm virsh start xenial List all the vms virsh list --all Stop the vm virsh destroy xenial Delete the vm virsh undefine xenial",
"title": "misc"
},
{
"location": "/stupid_kvm_tricks/#virsh-help",
"text": "The virsh help command returns a long chart of help information. But each section has\na keyword. Take for instance the command virsh help monitor . From this we\nsee the domiflist subcommand (among others). Unfortunately domifaddr doesn't seem to\nwork on the Ubuntu:16.04 host, but there are other ways to find the ip address of\na virtual machine. So now if you want to see what host interface the vm xenial is attached to,\ntype. virsh domiflist xenial which returns: Interface Type Source Model MAC\n-------------------------------------------------------\nvnet1 bridge virbr0 virtio 52:54:00:58:bf:75 So now we can find the address of virbr0 on the host machine. ifconfig virbr0 which returns: virbr0 Link encap:Ethernet HWaddr 52:54:00:38:87:38 \n inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0\n UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1\n RX packets:1351 errors:0 dropped:0 overruns:0 frame:0\n TX packets:3037 errors:0 dropped:0 overruns:0 carrier:0\n collisions:0 txqueuelen:1000 \n RX bytes:232346 (232.3 KB) TX bytes:502916 (502.9 KB) and thus we know what subnet to scan with nmap to find the ip address of the vm nmap -sn 192.168.122.0/24",
"title": "virsh help"
},
{
"location": "/stupid_kvm_tricks/#snapshots",
"text": "Create snapshot of vm dcing virsh snapshot-create-as --domain dcing --name dcing-snap0 But you don't need to name your snapshots because they are listed by time. virsh snapshot-create --domain dcing List snapshots for vm dcing virsh snapshot-list --domain dcing\n\n Name Creation Time State\n------------------------------------------------------------\n 1518366561 2018-02-11 08:29:21 -0800 shutoff\n dcing-snap0 2018-02-11 08:22:57 -0800 shutoff Revert dcing to snap0 virsh snapshot-revert --domain dcing --snapshotname dcing-snap0 Delete snapshot virsh snapshot-delete --domain dcing --snapshotname dcing-snap0",
"title": "Snapshots"
}
]
}