trentdocs_website/site/mkdocs/search_index.json
{
"docs": [
{
"location": "/",
"text": "Welcome to Trent Docs\n\n\nGit Repo For These Docs\n\n\nObviously, the commit history will reflect the time when these documents are written.\n\n\n\n\nServe And Share Apps From Your Phone With Fdroid\n\n\nLXD Container Home Server Networking For Dummies\n\n\nNspawn Containers\n\n\nMastodon on Arch\n\n\nDebian Nspawn Container On Arch For Testing Apache Configurations\n\n\nDynamic Cacheing Nginx Reverse Proxy For Pacman\n\n\nFreeBSD Jails on FreeNAS\n \n\n\nQuick Dirty Redis Nspawn Container on Arch Linux\n\n\nQuick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nSelf Signed Certs",
"title": "Home"
},
{
"location": "/#welcome-to-trent-docs",
"text": "",
"title": "Welcome to Trent Docs"
},
{
"location": "/#git-repo-for-these-docs",
"text": "Obviously, the commit history will reflect the time when these documents are written. Serve And Share Apps From Your Phone With Fdroid LXD Container Home Server Networking For Dummies Nspawn Containers Mastodon on Arch Debian Nspawn Container On Arch For Testing Apache Configurations Dynamic Cacheing Nginx Reverse Proxy For Pacman FreeBSD Jails on FreeNAS Quick Dirty Redis Nspawn Container on Arch Linux Quick Dirty Postgresql Nspawn Container on Arch Linux Self Signed Certs",
"title": "Git Repo For These Docs"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/",
"text": "LXD Container Home Server Networking For Dummies\n\n\nWhy?\n\n\nIf you're going to operate a fleet of LXD containers for home\nentertainment, you probably want some of them exposed with their\nown ip addresses on your home network, so that you can use them\nas containerized servers for various applications.\n\n\nOthers containers, you might want to be inaccessable from the lan,\nin a natted subnet, where they can solicit connections to the\noutside world from within their natted subnet, but are not addressable\nfrom the outside. A database server that you connect a web app to, for\ninstance, or a web app that you have a reverse proxy in front of.\n\n\nBut these are two separate address spaces, so ideally all of the containers\nwould have a second interface of their own, by which they could connect\nto a third network, that would be a private network that all of the containers\ncan use to talk directly to each other (or the host machine).\n\n\nIt's pretty straightforward, you just have to glue all the pieces together.\n\n\nThree Part Overview.\n\n\n\n\n\n\nDefine and create some bridges. \n\n\n\n\n\n\nDefine profiles that combine the network\ninterfaces in different combinations. In addition to two\nbridges you will have a macvlan with which to expose the containers\nthat you want exposed, but the macvlan doesn't come into\nplay until here in step two when you define profiles. \n\n\n\n\n\n\nAssign each container which profile it should use,\nand then configure the containers to use the included\nnetwork interfaces correctly. \n\n\n\n\n\n\nBuild Sum Moar Bridges\n\n\nThe containers will all have two network interfaces from\ntheir own internal point of view, \neth0\n and \neth1\n. \n\n\nIn this\nscheme we create a bridge for a natted subnet and a bridge for\na non-natted subnet. All of the containers will connect to the\nnon-natted subnet on their second interface, \neth1\n, and some\nof the containers will connect to the natted subnet on their \nfirst interface \neth0\n. 
The containers that don't connect\nto the natted subnet will instead connect to a macvlan\non their first interface \neth0\n, but that isn't part of this\nstep.\n\n\nbridge for a natted subnet\n\n\nIf you haven't used lxd before, you'll want to run the command \nlxd init\n.\nBy default this creates exactly the bridge we want, called \nlxdbr0\n.\n\n\nOtherwise you would use the following command to create \nlxdbr0\n.\n\n\nlxc network create lxdbr0\n\n\n\n\nTo generate a table of all the existing interfaces.\n\n\nlxc network list\n\n\n\n\nThis bridge is for our natted subnet, so we just want to go with\nthe default configuration.\n\n\nlxc network show lxdbr0\n\n\n\n\nThis cats a yaml file where you can see the randomly\ngenerated network for \nlxdbr0\n.\n\n\nconfig:\n ipv4.address: 10.99.153.1/24\n ipv4.nat: \"true\"\n ipv6.address: fd42:211e:e008:954b::1/64\n ipv6.nat: \"true\"\ndescription: \"\"\nname: lxdbr0\ntype: bridge\nused_by: []\nmanaged: true\n\n\n\n\nbridge for a non-natted subnet\n\n\nCreate \nlxdbr1\n\n\nlxc network create lxdbr1\n\n\n\n\nUse the following commands to remove nat from \nlxdbr1.\n\n\nlxc network set lxdbr1 ipv4.nat false\nlxc network set lxdbr1 ipv6.nat false\n\n\n\n\nOr if you use this next command, your favourite\ntext editor will pop open, preloaded with the complete yaml file\nand you can edit the configuration there.\n\n\nlxc network edit lxdbr1\n\n\n\n\nEither way you're looking for a result such as the following.\nNotice that the randomly generated address space is different\nfrom the one for \nlxdbr0\n, and that the *nat keys are set\nto \"false\".\n\n\nconfig:\n ipv4.address: 10.151.18.1/24\n ipv4.nat: \"false\"\n ipv6.address: fd42:89d4:f465:1b20::1/64\n ipv6.nat: \"false\"\ndescription: \"\"\nname: lxdbr1\ntype: bridge\nused_by: []\nmanaged: true\n\n\n\n\nProfiles\n\n\nrecycle the default\n\n\nWhen you first ran \nlxd init\n, that created a default profile.\nConfirm with the following.\n\n\nlxc profile list\n\n\n\n\nTo see what the default profile looks like.\n\n\nlxc profile show default\n\n\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Default LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: default\nused_by: []\n\n\n\n\nprofile the natted\n\n\nThe easiest way to create a new profile is to start by copying another one.\n\n\nlxc profile copy default natted\n\n\n\n\nedit the new \nnatted\n profile\n\n\nlxc profile edit natted\n\n\n\n\nAnd add an \neth1\n interface attached to \nlxdbr1\n. 
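If you'd rather not hand-edit the yaml, the same device can usually be added with a single command (a sketch; the profile and bridge names here are just the ones used above):\n\n\nlxc profile device add natted eth1 nic nictype=bridged parent=lxdbr1\n\n\n\n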
\neth0\n and \neth1\n will\nbe the interfaces visible from the container's point of view.\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Natted LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: natted\nused_by: []\n\n\n\n\nAny container assigned to the \nnatted\n profile, will have an interface \neth0\n connected\nto a natted subnet, and a second interface \neth1\n connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.\n\n\nprofile the exposed\n\n\nCreate the \nexposed\n profile\n\n\nlxc profile copy natted exposed\n\n\n\n\nand edit the new \nexposed\n profile\n\n\nlxc profile edit exposed\n\n\n\n\nchange the nictype for \neth0\n from \nbridged\n to \nmacvlan\n, and the parent should be\nthe name of the physical ethernet connection on the host machine, instead of a bridge.\n\n\nconfig:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Exposed LXD profile\ndevices:\n eth0:\n nictype: macvlan\n parent: eno1\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: []\n\n\n\n\nAny container assigned to the \nexposed\n profile, will have an interface \neth0\n connected\nto a macvlan, addressable from your lan, just like any other arbitrary computer on\nyour home network, and a second interface \neth1\n connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.\n\n\nAssign Containers to Profiles and configure them to connect correctly.\n\n\nThere are a lot of different ways that a Linux instance can solicit network services. So for\nnow I will just describe a method that will work here for a lxc container from ubuntu:16.04, as\nwell as a debian stretch container from images.linuxcontainers.org.\n\n\nStart a new container and assign the profile. We'll use an arbitrary whimsical container name,\n\nquick-joey\n. This process is the same for either the \nnatted\n profile or the \nexposed\n profile.\n\n\nlxc init ubuntu:16.04 quick-joey\n# assign the profile\nlxc profile assign quick-joey exposed\n# start quick-joey\nlxc start quick-joey\n# and start a bash shell\nlxc exec quick-joey bash\n\n\n\n\nWith either an ubuntu:16.04 container, or a debian stretch container, for either the \nnatted\n or\n\nexposed\n profile, because of all the above configuration work they will automatically connect on\ntheir \neth0\n interfaces and be able to talk to the internet. You need to edit \n/etc/network/interfaces\n,\nthe main difference being what that file looks like before you edit it.\n\n\nYou need to tell these containers how to connect to the non-natted subnet on \neth1\n.\n\n\nubuntu:16.04\n\n\nIf you start a shell on an ubuntu:16.04 container, you see that \n/etc/network/interfaces\n\ndescribes the loopback device for localhost, then sources \n/etc/network/interfaces.d/*.cfg\n where\nsome magical cloud-config jazz is going on. You just want to add a static ip description for \neth1\n\nto the file \n/etc/network/interfaces\n. 
And obviously take care that the static ip address you assign is\nunique and on the same subnet as \nlxdbr1\n.\n\n\nReminder: the address for \nlxdbr1\n is 10.151.18.1/24 (but it will be different on your machine).\n\n\nauto lo\niface lo inet loopback\n\nsource /etc/network/interfaces.d/*.cfg\n# what you add goes below here\nauto eth1\niface eth1 inet static\n address 10.151.18.123\n netmask 255.255.255.0\n broadcast 255.255.255.255 \n network 10.151.18.0\n\n\n\n\ndebian stretch\n\n\nThe configuration for a debian stretch container is the same, except that the file\n\n/etc/network/interfaces\n will also describe eth0, but you only have to add the \ndescription for eth1.\n\n\nthe /etc/hosts file\n\n\nOnce you assign the containers static ip addresses for their \neth1\n\ninterfaces, you can use the \n/etc/hosts\n file on each container to make them\naware of where the other containers and the host machine are.\n\n\nFor instance, if you want the container \nquick-joey\n to talk directly\nto the host machine, which will be at the ip address of \nlxdbr1\n, start a shell\non the container \nquick-joey\n\n\nlxc exec quick-joey bash\n\n\n\n\nand edit \n/etc/hosts\n\n\n# /etc/hosts\n10.151.18.1 mothership\n\n\n\n\nOr you have a container named \nfat-cinderella\n, that needs to be able to talk\ndirectly to \nquick-joey\n.\n\n\nlxc exec fat-cinderella bash\nvim /etc/hosts\n\n\n\n\n# /etc/hosts\n10.151.18.123 quick-joey\n\n\n\n\netcetera",
"title": "LXD Container Home Server Networking For Dummies"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#lxd-container-home-server-networking-for-dummies",
"text": "",
"title": "LXD Container Home Server Networking For Dummies"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#why",
"text": "If you're going to operate a fleet of LXD containers for home\nentertainment, you probably want some of them exposed with their\nown ip addresses on your home network, so that you can use them\nas containerized servers for various applications. Others containers, you might want to be inaccessable from the lan,\nin a natted subnet, where they can solicit connections to the\noutside world from within their natted subnet, but are not addressable\nfrom the outside. A database server that you connect a web app to, for\ninstance, or a web app that you have a reverse proxy in front of. But these are two separate address spaces, so ideally all of the containers\nwould have a second interface of their own, by which they could connect\nto a third network, that would be a private network that all of the containers\ncan use to talk directly to each other (or the host machine). It's pretty straightforward, you just have to glue all the pieces together.",
"title": "Why?"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#three-part-overview",
"text": "Define and create some bridges. Define profiles that combine the network\ninterfaces in different combinations. In addition to two\nbridges you will have a macvlan with which to expose the containers\nthat you want exposed, but the macvlan doesn't come into\nplay until here in step two when you define profiles. Assign each container which profile it should use,\nand then configure the containers to use the included\nnetwork interfaces correctly.",
"title": "Three Part Overview."
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#build-sum-moar-bridges",
"text": "The containers will all have two network interfaces from\ntheir own internal point of view, eth0 and eth1 . In this\nscheme we create a bridge for a natted subnet and a bridge for\na non-natted subnet. All of the containers will connect to the\nnon-natted subnet on their second interface, eth1 , and some\nof the containers will connect to the natted subnet on their \nfirst interface eth0 . The containers that don't connect\nto the natted subnet will instead connect to a macvlan\non their first interface eth0 , but that isn't part of this\nstep.",
"title": "Build Sum Moar Bridges"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#bridge-for-a-natted-subnet",
"text": "If you haven't used lxd before, you'll want to run the command lxd init .\nBy default this creates exactly the bridge we want, called lxdbr0 . Otherwise you would use the following command to create lxdbr0 . lxc network create lxdbr0 To generate a table of all the existing interfaces. lxd network list This bridge is for our natted subnet, so we just want to go with\nthe default configuration. lxc network show lxdbr0 This cats a yaml file where you can see the randomly\ngenerated network for lxdbr0 . config:\n ipv4.address: 10.99.153.1/24\n ipv4.nat: \"true\"\n ipv6.address: fd42:211e:e008:954b::1/64\n ipv6.nat: \"true\"\ndescription: \"\"\nname: lxdbr0\ntype: bridge\nused_by: []\nmanaged: true",
"title": "bridge for a natted subnet"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#bridge-for-a-non-natted-subnet",
"text": "Create lxdbr1 lxc network create lxdbr1 Use the following commands to remove nat from \nlxdbr1. lxc network set lxdbr1 ipv4.nat false\nlxc network set lxdbr1 ipv6.nat false Of if you use this next command, your favourite\ntext editor will pop open, preloaded with the complete yaml file\nand you can edit the configuration there. lxc network edit lxdbr1 Either way you're looking for a result such as the following.\nNotice that the randomly generated address space is different\nthat the one for lxdbr0 , and that the *nat keys are set\nto \"false\". config:\n ipv4.address: 10.151.18.1/24\n ipv4.nat: \"false\"\n ipv6.address: fd42:89d4:f465:1b20::1/64\n ipv6.nat: \"false\"\ndescription: \"\"\nname: lxdbr1\ntype: bridge\nused_by: []\nmanaged: true",
"title": "bridge for a non-natted subnet"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profiles",
"text": "",
"title": "Profiles"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#recycle-the-default",
"text": "When you first ran lxd init , that created a default profile.\nConfirm with the following. lxc profile list To see what the default profile looks like. lxc profile show default config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Default LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: default\nused_by: []",
"title": "recycle the default"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profile-the-natted",
"text": "The easiest way to create a new profile is start by copying another one. lxc profile copy default natted edit the new natted profile lxc profile edit natted And add an eth1 interface attached to lxdbr1 . eth0 and eth1 will\nbe the interfaces visible from the container's point of view. config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Natted LXD profile\ndevices:\n eth0:\n nictype: bridged\n parent: lxdbr0\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: natted\nused_by: [] Any container assigned to the natted profile, will have an interface eth0 connected\nto a natted subnet, and a second interface eth1 connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.",
"title": "profile the natted"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#profile-the-exposed",
"text": "Create the exposed profile lxc profile copy natted exposed and edit the new exposed profile lxc profile edit exposed change the nictype for eth0 from bridged to macvlan , and the parent should be\nthe name of the physical ethernet connection on the host machine, instead of a bridge. config:\n environment.http_proxy: \"\"\n security.privileged: \"true\"\n user.network_mode: \"\"\ndescription: Exposed LXD profile\ndevices:\n eth0:\n nictype: macvlan\n parent: eno1\n type: nic\n eth1:\n nictype: bridged\n parent: lxdbr1\n type: nic\n root:\n path: /\n pool: default\n type: disk\nname: exposed\nused_by: [] Any container assigned to the exposed profile, will have an interface eth0 connected\nto a macvlan, addressable from your lan, just like any other arbitrary computer on\nyour home network, and a second interface eth1 connected to a non-natted subnet, with\na static ip on which it will be able to talk directly to the other containers and the host\nmachine.",
"title": "profile the exposed"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#assign-containers-to-profiles-and-configure-them-to-connect-correctly",
"text": "There are a lot of different ways that a Linux instance can solicit network services. So for\nnow I will just describe a method that will work here for a lxc container from ubuntu:16.04, as\nwell as a debian stretch container from images.linuxcontainers.org. Start a new container and assign the profile. We'll use an arbitrary whimsical container name, quick-joey . This process is the same for either the natted profile or the exposed profile. lxc init ubuntu:16.04 quick-joey\n# assign the profile\nlxc profile assign quick-joey exposed\n# start quick-joey\nlxc start quick-joey\n# and start a bash shell\nlxc exec quick-joey bash With either an ubuntu:16.04 container, or a debian stretch container, for either the natted or exposed profile, because of all the above configuration work they will automatically connect on\ntheir eth0 interfaces and be able to talk to the internet. You need to edit /etc/network/interfaces ,\nthe main difference being what that file looks like before you edit it. You need to tell these containers how to connect to the non-natted subnet on eth1 .",
"title": "Assign Containers to Profiles and configure them to connect correctly."
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#ubuntu1604",
"text": "If you start a shell on an ubuntu:16.04 container, you see that /etc/network/interfaces \ndescribes the loopback device for localhost, then sources /etc/network/interfaces.d/*.cfg where\nsome magical cloud-config jazz is going on. You just want to add a static ip description for eth1 \nto the file /etc/network/interfaces . And obviously take care that the static ip address you assign is\nunique and on the same subnet with lxdbr1 . Reminder: the address for lxdbr1 is 10.151.18.1/24, (but it will be different on your machine). auto lo\niface lo inet loopback\n\nsource /etc/network/interfaces.d/*.cfg\n# what you add goes below here\nauto eth1\niface eth1 inet static\n address 10.151.18.123\n netmask 255.255.255.0\n broadcast 255.255.255.255 \n network 10.151.18.0",
"title": "ubuntu:16.04"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#debian-stretch",
"text": "The configuration for a debian stretch container is the same, except the the file /etc/network/interfaces will also describe eth0, but you only have to add the \ndescription for eth1.",
"title": "debian stretch"
},
{
"location": "/lxd_container_home_server_networking_for_dummies/#the-etchosts-file",
"text": "Once you assign the containers static ip addresses for their eth1 \ninterfaces, you can use the /etc/hosts file on each container to make them\naware of where the other containers and the host machine are. For instance, if you want the container quick-joey to talk directly\nto the host machine, which will be at the ip address of lxdbr1 , start a shell\non the container quick-joey lxc exec quick-joey bash and edit /etc/hosts # /etc/hosts\n10.151.18.1 mothership Or you have a container named fat-cinderella , that needs to be able to talk\ndirectly quick-joey . lxc exec fat-cinderella bash\nvim /etc/hosts # /etc/hosts\n10.151.18.123 quick-joey etcetera",
"title": "the /etc/hosts file"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/",
"text": "Serve And Share Apps From Your Phone With Fdroid\n\n\nThis can speed up the process of updating apps on your devices, especially if fdroid is slow. \n\n\nStep 3: you are born on third base, find the menu item for \nSwap apps\n on phone one\n\n\nOpen fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select \nSwap apps\n.\n\n\n\n\nStep 4: enable the repo server on phone one\n\n\nOn the next screen toggle on \nVisible via Wi-Fi\n\n\n\n\nStep 5: a small step for your android\n\n\nAt the bottom of the screen select \nSCAN QR CODE\n\n\n\n\nStep 6: choose which apps to serve from phone one\n\n\nAt the next screen \nChoose Apps\n you want to xerve I mean serve and then touch the -> right arrow to proceed\n\n\n\n\nStep 7: another small step for your android\n\n\nTouch the -> right arrow again, do it.\n\n\n\n\nOcho: <- this means step eight\n\n\nTouch the -> right arrow until you are coming here\n\n\n\nNotice you can use either a qr code or a local url, so grab one of your other phones.\n\n\nPrivacy Friendly Qr Scanner\n appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.\n\n\nStep 9: find the menu item for \nRepositories\n on phone two\n\n\nOn your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose \nRepositories\n\n\n\n\nStep 10: (temporarily) toggle off the remote repos on phone two\n\n\nToggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner\n\n\n\n\nStep 11 A: key in the local repo url by hand on phone two\n\n\nAfter touching the + plus button in \nStep Ten\n on phone two, you can fill in the url address that corresponds to the photo in \nOcho\n\n\n\n\nStep 12 A: or scan in the local repo url with qr code on phone two\n\n\nIf you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo \nOcho\n. The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in \nStep Eleven A\n\n\nStep 13: profit from moar faster local downloads\n\n\nOn phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet.\n\n\n\n\nStep 14: how to remember all this?\n\n\nYou can bookmark.\n\n\nIn fact, you can add a shortcut icon directly to \n\nthis page\n,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof FireFox for Android.\n\n\nOr you can clone \nthe git repo\n\nwhich this site automatically builds itself from.",
"title": "Serve And Share Apps From Your Phone With Fdroid"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#serve-and-share-apps-from-your-phone-with-fdroid",
"text": "This can speed up the process of updating apps on your devices, especially if fdroid is slow.",
"title": "Serve And Share Apps From Your Phone With Fdroid"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-3-you-are-born-on-third-base-find-the-menu-item-for-swap-apps-on-phone-one",
"text": "",
"title": "Step 3: you are born on third base, find the menu item for Swap apps on phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#open-fdroid-and-navigate-to-the-menu-by-touching-three-dots-in-upper-right-hand-corner-of-the-screen-select-swap-apps",
"text": "",
"title": "Open fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select Swap apps."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-4-enable-the-repo-server-on-phone-one",
"text": "",
"title": "Step 4: enable the repo server on phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-the-next-screen-toggle-on-visible-via-wi-fi",
"text": "",
"title": "On the next screen toggle on Visible via Wi-Fi"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-5-a-small-step-for-your-android",
"text": "",
"title": "Step 5: a small step for your android"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-bottom-of-the-screen-select-scan-qr-code",
"text": "",
"title": "At the bottom of the screen select SCAN QR CODE"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-6-choose-which-apps-to-serve-from-phone-one",
"text": "",
"title": "Step 6: choose which apps to serve from phone one"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-next-screen-choose-apps-you-want-to-xerve-i-mean-serve-and-then-touch-the-right-arrow-to-proceed",
"text": "",
"title": "At the next screen Choose Apps you want to xerve I mean serve and then touch the -&gt; right arrow to proceed"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-7-another-small-step-for-your-android",
"text": "",
"title": "Step 7: another small step for your android"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-again-do-it",
"text": "",
"title": "Touch the -&gt; right arrow again, do it."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#ocho-this-means-step-eight",
"text": "",
"title": "Ocho: &lt;- this means step eight"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-until-you-are-coming-here",
"text": "Notice you can use either a qr code or a local url, so grab one of your other phones. Privacy Friendly Qr Scanner appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.",
"title": "Touch the -&gt; right arrow until you are coming here"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-9-find-the-menu-item-for-repositories-on-phone-two",
"text": "",
"title": "Step 9: find the menu item for Repositories on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-your-other-phone-open-fdroid-navigate-to-menu-by-selecting-the-3-dots-in-the-upper-right-hand-corner-and-choose-repositories",
"text": "",
"title": "On your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose Repositories"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-10-temporarily-toggle-off-the-remote-repos-on-phone-two",
"text": "",
"title": "Step 10: (temporarily) toggle off the remote repos on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#toggle-all-the-current-repos-off-and-then-if-you-want-to-key-in-the-new-local-repo-url-by-hand-touch-the-plus-in-the-upper-right-hand-corner",
"text": "",
"title": "Toggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-11-a-key-in-the-local-repo-url-by-hand-on-phone-two",
"text": "",
"title": "Step 11 A: key in the local repo url by hand on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#after-touching-the-plus-button-in-step-ten-on-phone-two-you-can-fill-in-the-url-address-that-corresponds-to-the-photo-in-ocho",
"text": "",
"title": "After touching the + plus button in Step Ten on phone two, you can fill in the url address that corresponds to the photo in Ocho"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-12-a-or-scan-in-the-local-repo-url-with-qr-code-on-phone-two",
"text": "If you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo Ocho . The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in Step Eleven A",
"title": "Step 12 A: or scan in the local repo url with qr code on phone two"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-13-profit-from-moar-faster-local-downloads",
"text": "",
"title": "Step 13: profit from moar faster local downloads"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-phone-two-you-can-now-download-and-install-apps-and-updates-from-phone-one-and-the-download-speed-will-be-much-faster-than-from-the-internet",
"text": "",
"title": "On phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet."
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-14-how-to-remember-all-this",
"text": "",
"title": "Step 14: how to remember all this?"
},
{
"location": "/serve_and_share_apps_from_your_phone_with_fdroid/#you-can-bookmark",
"text": "In fact, you can add a shortcut icon directly to this page ,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof FireFox for Android. \nOr you can clone the git repo \nwhich this site automatically builds itself from.",
"title": "You can bookmark."
},
{
"location": "/nspawn/",
"text": "Nspawn Containers\n\n\nThis Link For Arch Linux Wiki for Nspawn Containers\n\n\nI like the idea of starting with the easy containers first.\n\n\nCreate a FileSystem\n\n\ncd /var/lib/machines\n# create a directory\nmkdir <container>\n# use pacstrap to create a file system\npacstrap -i -c -d <container> base --ignore linux\n\n\n\n\nAt this point you might want to copy over some configs to save time later.\n\n\n\n\n/etc/locale.conf\n\n\n/root/.bashrc\n\n\n/etc/locale.gen\n\n\n\n\nFirst boot and create root password\n\n\nsystemd-nspawn -b -D <container>\npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone <timezone>\n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add *-n* flag\nsystemd-nspawn -b -D <container> -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D <container> -n --bind=/var/cache/pacman/pkg\n\n\n\n\nNetworking\n\n\nHere's a link that skips ahead to \nAutomatically Starting the Container\n\n\nOn Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work.\n\nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done.\n\n\n# /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge\n\n\n\n\n# /etc/systemd/network/50-br0.netdev\n[Match]\nName=br0\n\n[Network]\nAddress=10.0.55.1/24 # arbitrarily pick a subnet range to taste\nDHCPServer=yes\nIPMasquerade=yes\n\n\n\n\nNotice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the \nsystemd-nspawn\n\ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other.\n\n\n# first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D <container> --network-bridge=br0 --bind=/var/cache/pacman/pkg\n\n\n\n\nAutomatically Starting the Container\n\n\nHere's a link back up to \nNetworking\n\nin case you previously skipped ahead.\n\n\nThere are two ways to automate starting the container. You can override\n\nsystemd-nspawn@.service\n or create an \nnspawn\n file. 
\n\n\nFirst enable machines.target\n\n\n# to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@<container>.service\n\n\n\n\nEdit \n/etc/systemd/system/systemd-nspawn@<container>.service\n to add the \nsystemd-nspawn\n options\nyou want to the \nExecStart\n command.\n\n\nOr create \n/etc/systemd/nspawn/<container>.nspawn\n\n\n# /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0\n\n\n\n\n# /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nVirtualEthernet=1 # this seems to be the default sometimes, though\n\n\n\n\n# in either case\nsystemctl start/enable systemd-nspawn@<container>\n# to get a shell\nmachinectl shell <container>\n# and then to get an environment\nbash\n\n\n\n\nThis would be a good time to check for network and name resolution,\nsymlink resolv.conf if need be.\n\n\nInitial Configuration Inside The Container\n\n\n# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, networktime\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com\n\n\n\n\nIf you want to change the locale\n\n\nFinal Observations\n\n\n\n\nYou can start/stop nspawn containers with \nmachinectl\n command. \n\n\nYou can start nspawn containers with \nsystemd-nspawn\n command.\n\n\nYou can configure the systemd service for a container with @nspawn.service file override\n\n\nOr you can configure an nspawn container with a dot.nspawn file\n\n\n\n\nBut in regards to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts.\n\n\nAnother curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.",
"title": "Nspawn"
},
{
"location": "/nspawn/#nspawn-containers",
"text": "This Link For Arch Linux Wiki for Nspawn Containers I like the idea of starting with the easy containers first.",
"title": "Nspawn Containers"
},
{
"location": "/nspawn/#create-a-filesystem",
"text": "cd /var/lib/machines\n# create a directory\nmkdir <container>\n# use pacstrap to create a file system\npacstrap -i -c -d <container> base --ignore linux At this point you might want to copy over some configs to save time later. /etc/locale.conf /root/.bashrc /etc/locale.gen",
"title": "Create a FileSystem"
},
{
"location": "/nspawn/#first-boot-and-create-root-password",
"text": "systemd-nspawn -b -D <container>\npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone <timezone>\n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add *-n* flag\nsystemd-nspawn -b -D <container> -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D <container> -n --bind=/var/cache/pacman/pkg",
"title": "First boot and create root password"
},
{
"location": "/nspawn/#networking",
"text": "Here's a link that skips ahead to Automatically Starting the Container On Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work. \nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done. # /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge # /etc/systemd/network/50-br0.netdev\n[Match]\nName=br0\n\n[Network]\nAddress=10.0.55.1/24 # arbitrarily pick a subnet range to taste\nDHCPServer=yes\nIPMasquerade=yes Notice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the systemd-nspawn \ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other. # first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D <container> --network-bridge=br0 --bind=/var/cache/pacman/pkg",
"title": "Networking"
},
{
"location": "/nspawn/#automatically-starting-the-container",
"text": "Here's a link back up to Networking \nin case you previously skipped ahead. There are two ways to automate starting the container. You can override systemd-nspawn@.service or create an nspawn file. First enable machines.target # to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@<container>.service Edit /etc/systemd/system/systemd-nspawn@<container>.service to add the systemd-nspawn options\nyou want to the ExecStart command. Or create /etc/systemd/nspawn/<container>.nspawn # /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0 # /etc/systemd/nspawn/<container>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nVirtualEthernet=1 # this seems to be the default sometimes, though # in either case\nsystemctl start/enable systemd-nspawn@<container>\n# to get a shell\nmachinectl shell <container>\n# and then to get an environment\nbash This would be a good time to check for network and name resolution,\nsymlink resolv.conf if need be.",
"title": "Automatically Starting the Container"
},
{
"location": "/nspawn/#initial-configuration-inside-the-container",
"text": "# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, networktime\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com If you want to change the locale",
"title": "Initial Configuration Inside The Container"
},
{
"location": "/nspawn/#final-observations",
"text": "You can start/stop nspawn containers with machinectl command. You can start nspawn containers with systemd-nspawn command. You can configure the systemd service for a container with @nspawn.service file override Or you can configure an nspawn container with a dot.nspawn file But in regards to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts. Another curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.",
"title": "Final Observations"
},
{
"location": "/mastodon_on_arch/",
"text": "Some Observations About Installing Mastodon on Arch.\n\n\nNginx\n\n\nFrom the \nProduction Guide\n\nyou can copy the example nginx.conf file to \n/etc/nginx/sites-enabled/some_arbitrary.conf\n,\nand then add the following to \n/etc/nginx/nginx.conf\n in the http section,\nthis with a fresh install of nginx with the default configuration file.\n\n\n# /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}\n\n\n\n\nInstalling the Dependancies\n\n\npacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd\n\n\n\n\n# I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c\n\n\n\n\n\n\nI'm assuming base-devel is installed\n\n\npython2 seems to be required to run \nyarn install\n command later on\n\n\noidentd seems to be a usable replacement for pident\n\n\nlibpqxx pulls in postgresql-libs\n\n\nfile is already installed\n\n\ncurl is already installed\n\n\nruby-build and rbenv are installable from aur\n\n\nalso postgresql and redis unless, those are in another container or whatever.\n\n\n\n\nOther Observations\n\n\nI discovered that between \ngem install bundler\n and\n\n\nbundle install --deployment --without development test\n,\nyou have to update your environment, with \n\neval \"$(rbenv init -)\"\n, i.e.\n\n\necho 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. ~/.bashrc\n\n\n\n\nYou have to update your environment more than once, during the\ninstallation.\n\n\nPresumably you don't ever want to delete the \n~/live/Public/\n directory\nif that is where assets are being stored, but it seems ok to delete \n\n~/live/node_modules\n and then rerun the \nyarn install\n command.\n\n\nIn \n~/live/.env.production\n, \nSINGLE_USER_MODE=false\n has to be set\nto \nfalse\n until at least one user is created, or the web service won't \neven start. (Also \nchmod 755 ~/\n)",
"title": "Mastodon on Arch"
},
{
"location": "/mastodon_on_arch/#some-observations-about-installing-mastodon-on-arch",
"text": "",
"title": "Some Observations About Installing Mastodon on Arch."
},
{
"location": "/mastodon_on_arch/#nginx",
"text": "From the Production Guide \nyou can copy the example nginx.conf file to /etc/nginx/sites-enabled/some_arbitrary.conf ,\nand then add the following to /etc/nginx/nginx.conf in the http section,\nthis with a fresh install of nginx with the default configuration file. # /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}",
"title": "Nginx"
},
{
"location": "/mastodon_on_arch/#installing-the-dependancies",
"text": "pacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd # I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c I'm assuming base-devel is installed python2 seems to be required to run yarn install command later on oidentd seems to be a usable replacement for pident libpqxx pulls in postgresql-libs file is already installed curl is already installed ruby-build and rbenv are installable from aur also postgresql and redis unless, those are in another container or whatever.",
"title": "Installing the Dependancies"
},
{
"location": "/mastodon_on_arch/#other-observations",
"text": "I discovered that between gem install bundler and bundle install --deployment --without development test ,\nyou have to update your environment, with eval \"$(rbenv init -)\" , i.e. echo 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. ~/.bashrc You have to update your environment more than once, during the\ninstallation. Presumably you don't ever want to delete the ~/live/Public/ directory\nif that is where assets are being stored, but it seems ok to delete ~/live/node_modules and then rerun the yarn install command. In ~/live/.env.production , SINGLE_USER_MODE=false has to be set\nto false until at least one user is created, or the web service won't \neven start. (Also chmod 755 ~/ )",
"title": "Other Observations"
},
{
"location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/",
"text": "Debian Nspawn Container On Arch For Testing Apache Configurations\n\n\nBegin by exporting the environmental variable for your squid cacheing \nproxy. If you're deboostrapping Debian file systems, the best way to\nspeed this up is with squid.\n\n\nThe ArchWiki page for nspawn containers has a\n\nDebian/Ubuntu subsection\n\nObviously you're going to want to install debootstrap and debian-archive-keyring.\n\n\n# to create a Stretch Container\ncd /var/lib/machines \nmkdir <container name> \ndeboostrap stretch <container name>\n\n\n\n\nAfter some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container.\n\n\ncp apt.conf /etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container name>/etc/hostname\n\n\n\n\nAnd then start the container, and set the root password.\n\n\n# boot in interactive mode\nsystemd-nspawn -D <container name>\n# set the passwd and logout\npassword \nlogout \n\n\n\n\nNow we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that the your bind mounts have the correct permissions \nfrom inside the container.\n\n\n# for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container name> --network-bridge=br0 --bind=/var/cache/apt/archives\n\n\n\n\nAlternately, if you use an nspawn file, then you can use a command \nsimilar to the following to start it, you'll first need to \nboot the container from the command line and install dbus,\nbecause \nmachinectl shell\n and \nmachinectl login\n won't work \nwithout dbus. In this case use the following sequence of commands.\n\n\n# start the container and login as root\nsystemd-nspawn -b -D <container name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff\n\n\n\n\nAfter this you can start the container with systemd, when \nusing an nspawn file.\n\n\nsystemctl start systemd-nspawn@<container name>\n\n\n\n\n# /etc/systemd/nspawn/<container name>.spawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nbridge=br0 \n\n\n\n\nYou can use tasksel to install a web-server.\n\n\n# apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2\n\n\n\n\nYou'll be up and running with the default self-signed certs.",
"title": "Debian Nspawn Container On Arch For Testing Apache Configurations"
},
{
"location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/#debian-nspawn-container-on-arch-for-testing-apache-configurations",
"text": "Begin by exporting the environmental variable for your squid cacheing \nproxy. If you're deboostrapping Debian file systems, the best way to\nspeed this up is with squid. The ArchWiki page for nspawn containers has a Debian/Ubuntu subsection \nObviously you're going to want to install debootstrap and debian-archive-keyring. # to create a Stretch Container\ncd /var/lib/machines \nmkdir <container name> \ndeboostrap stretch <container name> After some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container. cp apt.conf /etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container name>/etc/hostname And then start the container, and set the root password. # boot in interactive mode\nsystemd-nspawn -D <container name>\n# set the passwd and logout\npassword \nlogout Now we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that the your bind mounts have the correct permissions \nfrom inside the container. # for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container name> --network-bridge=br0 --bind=/var/cache/apt/archives Alternately, if you use an nspawn file, then you can use a command \nsimilar to the following to start it, you'll first need to \nboot the container from the command line and install dbus,\nbecause machinectl shell and machinectl login won't work \nwithout dbus. In this case use the following sequence of commands. # start the container and login as root\nsystemd-nspawn -b -D <container name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff After this you can start the container with systemd, when \nusing an nspawn file. systemctl start systemd-nspawn@<container name> # /etc/systemd/nspawn/<container name>.spawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nbridge=br0 You can use tasksel to install a web-server. # apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2 You'll be up and running with the default self-signed certs.",
"title": "Debian Nspawn Container On Arch For Testing Apache Configurations"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/",
"text": "Dynamic Cacheing Nginx Reverse Proxy For Pacman\n\n\nYou set up a dynamic cacheing reverse proxy and then you put the ip address or hostname for that server in \n/etc/pacman.d/mirrorlist\n on your client machines.\n\n\nOf course if you want to you can set this up and run it in an\n\nNspawn Container\n.\nThe \nArchWiki Page for pacman tips\n\nmostly spells out what to do, but I want to document\nthe exact steps I would take.\n\n\nAs for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with it's own ip address. But it only does one\nthing, and that's all you have to configure it for.\n\n\nI see no reason to use nginx-mainline instead of stable.\n\n\npacman -S nginx\n\n\n\n\nThe suggested configuration in the Arch Wiki\nis to create a directory \n/srv/http/pacman-cache\n,\nand that seems to work well enough\n\n\nmkdir /srv/http/pacman-cache\n# and then change it's ownershipt\nchown http:http /srv/http/pacman-cache\n\n\n\n\nnginx configuration\n\n\nand then it references an nginx.conf in\n\nthis gist\n,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx.\n\n\nYou can start with a default \n/etc/nginx/nginx.conf\n,\nand add the line \ninclude sites-enabled/*;\n\nat the end of the \nhttp\n section.\n\n\n# /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # 
ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n}\n\n\n\n\nAnd then create the directory \n/etc/nginx/sites-enabled\n\n\nmkdir /etc/nginx/sites-enabled\n\n\n\n\nAnd then create \n/etc/nginx/sites-enabled/proxy_cache.conf\n,\nwhich is \nmostly\n a\n\ncopy-and-paste from this gist\n.\n\n\nNotice the \nserver_name\n. This has to match the entry in\n\n/etc/pacman.d/mirrorlist\n on the client machines you are\nupdating from. If you can use the hostname, great. But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist.\n\n\nAnd of course your mirrorlist entry\non the client machine, has to preserve the directory scheme.\n\n\n# /etc/pacman.d/mirrorlist\nServer = http://<hostname or ip address>:<port if not 80>/archlinux/$repo/os/$arch\n\n\n\n\n# /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <hostname or ip address>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}\n\n\n\n\nsystemd service that cleans the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_clean.timer\n\n\n\n\nKeeps the 2 most recent versions of each package using paccache 
command.\n\n\n# /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that cleans the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target\n\n\n\n\nsystemd service that deletes the pacman database files from the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer\n\n\n\n\nYou won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, will keep an\neye on things.\n\n\n# /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $$(find /srv -name '*db') ; do rm $$f; done\"\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that deletes the pacman database files from the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target",
"title": "Dynamic Cacheing Nginx Reverse Proxy For Pacman"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dynamic-cacheing-nginx-reverse-proxy-for-pacman",
"text": "",
"title": "Dynamic Cacheing Nginx Reverse Proxy For Pacman"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#you-set-up-a-dynamic-cacheing-reverse-proxy-and-then-you-put-the-ip-address-or-hostname-for-that-server-in-etcpacmandmirrorlist-on-your-client-machines",
"text": "Of course if you want to you can set this up and run it in an Nspawn Container .\nThe ArchWiki Page for pacman tips \nmostly spells out what to do, but I want to document\nthe exact steps I would take. As for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with it's own ip address. But it only does one\nthing, and that's all you have to configure it for. I see no reason to use nginx-mainline instead of stable. pacman -S nginx The suggested configuration in the Arch Wiki\nis to create a directory /srv/http/pacman-cache ,\nand that seems to work well enough mkdir /srv/http/pacman-cache\n# and then change it's ownershipt\nchown http:http /srv/http/pacman-cache",
"title": "You set up a dynamic cacheing reverse proxy and then you put the ip address or hostname for that server in /etc/pacman.d/mirrorlist on your client machines."
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#nginx-configuration",
"text": "and then it references an nginx.conf in this gist ,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx. You can start with a default /etc/nginx/nginx.conf ,\nand add the line include sites-enabled/*; \nat the end of the http section. # /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n} And then create the directory /etc/nginx/sites-enabled mkdir /etc/nginx/sites-enabled And then create /etc/nginx/sites-enabled/proxy_cache.conf ,\nwhich is mostly a copy-and-paste from this gist . Notice the server_name . This has to match the entry in /etc/pacman.d/mirrorlist on the client machines you are\nupdating from. If you can use the hostname, great. But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist. And of course your mirrorlist entry\non the client machine, has to preserve the directory scheme. 
# /etc/pacman.d/mirrorlist\nServer = http://<hostname or ip address>:<port if not 80>/archlinux/$repo/os/$arch # /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <hostname or ip address>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}",
"title": "nginx configuration"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-cleans-the-proxy-cache",
"text": "",
"title": "systemd service that cleans the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer",
"text": "systemctl enable/start /etc/systemd/system/proxy_cache_clean.timer Keeps the 2 most recent versions of each package using paccache command. # /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog",
"title": "don't enable the service, enable the timer"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-cleans-the-proxy-cache",
"text": "# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for clean The pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target",
"title": "systemd timer for the systemd service that cleans the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache",
"text": "",
"title": "systemd service that deletes the pacman database files from the proxy cache"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer_1",
"text": "systemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer You won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, will keep an\neye on things. # /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean The pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $(find /srv -name *db) ; do rm $f; done\"\nStandardOutput=syslog\nStandardError=syslog",
"title": "don't enable the service, enable the timer"
},
{
"location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache",
"text": "# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for clean The pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target",
"title": "systemd timer for the systemd service that deletes the pacman database files from the proxy cache"
},
{
"location": "/freebsd_jails_on_freenas/",
"text": "FreeBSD Jails on FreeNAS\n\n\nMostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.\n\n\nIn The FreeNAS WebGui, Create A New Jail\n\n\nThe default networking configuration, will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to \nJails -> Storage -> Add Storage\n and\nadd the \npkg\n storage directory to \n/var/cache/pkg\n\ninside the jail. \n\n\nFor instance, on my local FreeNAS server,\nthe pkg directory is at /mnt/VolumeOne/pkg/.\n\n\nIf you ssh into the host server, you can type the command\n\njls\n, to list the jails. Based on the output of the\ncommand \njls\n, you can get a shell with \njexec <jail number>\n\nof \njexec <jail hostname>\n.\n\n\nupdating\n\n\nHow about the command \npkg audit -F\n? Downloads a\nlist of known security issues and checks your system\nagainst that.\n\n\nI would recommend, to myself anyway, to shell into\nthe new jail with \njexec\n, run \npkg upgrade\n to install any new packages,\nand then from the FreeNAS webgui, restart the jail. Although\nthe restarted jail will have a new jail number as reported by\nthe \njls\n command.\n\n\nlocale\n\n\nWhen you use \njexec\n to get a shell, you get an environment\nwith an utf_8 locale. Not so if you ssh into the new jail.\nFor this put the following contents into ~/.login_conf\n\n\n# ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:\n\n\n\n\nssh\n\n\nTo get ssh running, edit \n/etc/rc.conf\n inside the jail.\n\n\n# /etc/rc.conf\nsshd_enable=\"YES\"\n\n\n\n\nTo start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command.\n\n\nservice sshd start\n\n\n\n\nByobu\n\n\nYou'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend.\n\n\npkg install byobu tmux newt\n\n\n\n\nIf you execute \nbyobu-config\n, by pressing \nf9\n, the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working so you have to enable them\none at a time to see what happens.\n\n\n\n\ndate\n\n\ndisk\n\n\ndistro\n\n\nhostname\n\n\nip address\n\n\nload_average\n\n\nlogo\n\n\ntime\n\n\nuptime\n\n\nusers\n\n\nwhoami\n\n\n\n\nvim\n\n\nVia pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependancies, but vim-lite is not build with python.\n\n\nFor instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite.\n\n\nTo get lightline working update $TERM\n\n\n# ~/.config/fish/config.fish\nexport TERM=xterm-256color\n\n\n\n\nAnd vimrc\n\n\n# ~/.vimrc\nset ls=2\n\n\n\n\nAnother option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependancies with the following in /etc/make.conf.\n\n\n# /etc/make.conf\nWITHOUT_X11=yes\n\n\n\n\nAnd then when you compile vim from ports, run \nmake config\n where you can enable\npython.\n\n\npython\n\n\nFor python3 virtualenv\n\n\nvirtualenv-3.6 <directory>\n\n\n\n\nrunning gitit under the supervision of supervisord\n\n\npy27-supervisor and hs-gitit are available as pkg install, if you want to\nrun a gitit wiki.\n\n\ngitit doesn't come with an init service. 
To generate a sample config,\nrun \ngitit --print-default-config > gitit.conf\n, and then if you want\nyou can reference gitit.conf by passing gitit the \n-f\n flag.\n\n\nSo for instance, after you install supervisord, add something like the\nfollowing to the end of \n/usr/local/etc/supervisord.conf\n, and create\nthe directory \n/var/log/supervisor/\n.\n\n\n[program:gitit]\nuser=<user>\ndirectory=/path/to/wikidata/directory/\ncommand=/usr/local/bin/gitit -f /usr/local/etc/gitit.conf\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true\n\n\n\n\nsupervisord is a service you can enable in\n\n/etc/rc.conf\n\n\n# /etc/rc.conf\nsupervisord_enable=\"YES\"\n\n\n\n\nand then start with \nservice supervisord start\n\nwhen you get supervisord running, you can start a\nsupervisorctl shell, i.e.\n\n\nsupervisorctl\nsupervisor> status\n# outputs\ngitit RUNNING pid 98057, uptime 0:32:27\nsupervisor> start/restart/stop gitit\nsupervisor> exit\n\n\n\n\nBut there is one other little detail, in that when you try to\nrun gitit as a daemon like this, on FreeBSD it will fail because it can't\nfind git. But the symlink solution is easy enough.\n\n\nln -s /usr/local/bin/git /usr/bin/\n\n\n\n\nAnd you might as well stick a reverse proxy in front of it. Assuming\nyou configure gitit listen only on localhost:5001, install nginx.\n\npkg install nginx\n\n\nenable nginx in /etc/rc.conf\n\n\nnginx_enable=\"YES\"\n\n\n\n\nThen, in the file \n/usr/local/etc/nginx/nginx.conf\n change the location \"\n/\n\"\nso that it looks like this.\n\n\n{\n.....\n location / {\n # root /usr/local/www/nginx;\n # index index.html index.htm;\n proxy_pass http://127.0.0.1:5001;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n....\n}\n\n\n\n\nand then start nginx \nservice nginx start",
"title": "FreeBSD Jails on FreeNAS"
},
{
"location": "/freebsd_jails_on_freenas/#freebsd-jails-on-freenas",
"text": "Mostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.",
"title": "FreeBSD Jails on FreeNAS"
},
{
"location": "/freebsd_jails_on_freenas/#in-the-freenas-webgui-create-a-new-jail",
"text": "The default networking configuration, will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to Jails -> Storage -> Add Storage and\nadd the pkg storage directory to /var/cache/pkg \ninside the jail. For instance, on my local FreeNAS server,\nthe pkg directory is at /mnt/VolumeOne/pkg/. If you ssh into the host server, you can type the command jls , to list the jails. Based on the output of the\ncommand jls , you can get a shell with jexec <jail number> \nof jexec <jail hostname> .",
"title": "In The FreeNAS WebGui, Create A New Jail"
},
{
"location": "/freebsd_jails_on_freenas/#updating",
"text": "How about the command pkg audit -F ? Downloads a\nlist of known security issues and checks your system\nagainst that. I would recommend, to myself anyway, to shell into\nthe new jail with jexec , run pkg upgrade to install any new packages,\nand then from the FreeNAS webgui, restart the jail. Although\nthe restarted jail will have a new jail number as reported by\nthe jls command.",
"title": "updating"
},
{
"location": "/freebsd_jails_on_freenas/#locale",
"text": "When you use jexec to get a shell, you get an environment\nwith an utf_8 locale. Not so if you ssh into the new jail.\nFor this put the following contents into ~/.login_conf # ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:",
"title": "locale"
},
{
"location": "/freebsd_jails_on_freenas/#ssh",
"text": "To get ssh running, edit /etc/rc.conf inside the jail. # /etc/rc.conf\nsshd_enable=\"YES\" To start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command. service sshd start",
"title": "ssh"
},
{
"location": "/freebsd_jails_on_freenas/#byobu",
"text": "You'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend. pkg install byobu tmux newt If you execute byobu-config , by pressing f9 , the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working so you have to enable them\none at a time to see what happens. date disk distro hostname ip address load_average logo time uptime users whoami",
"title": "Byobu"
},
{
"location": "/freebsd_jails_on_freenas/#vim",
"text": "Via pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependancies, but vim-lite is not build with python. For instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite. To get lightline working update $TERM # ~/.config/fish/config.fish\nexport TERM=xterm-256color And vimrc # ~/.vimrc\nset ls=2 Another option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependancies with the following in /etc/make.conf. # /etc/make.conf\nWITHOUT_X11=yes And then when you compile vim from ports, run make config where you can enable\npython.",
"title": "vim"
},
{
"location": "/freebsd_jails_on_freenas/#python",
"text": "For python3 virtualenv virtualenv-3.6 <directory>",
"title": "python"
},
{
"location": "/freebsd_jails_on_freenas/#running-gitit-under-the-supervision-of-supervisord",
"text": "py27-supervisor and hs-gitit are available as pkg install, if you want to\nrun a gitit wiki. gitit doesn't come with an init service. To generate a sample config,\nrun gitit --print-default-config > gitit.conf , and then if you want\nyou can reference gitit.conf by passing gitit the -f flag. So for instance, after you install supervisord, add something like the\nfollowing to the end of /usr/local/etc/supervisord.conf , and create\nthe directory /var/log/supervisor/ . [program:gitit]\nuser=<user>\ndirectory=/path/to/wikidata/directory/\ncommand=/usr/local/bin/gitit -f /usr/local/etc/gitit.conf\nstdout_logfile=/var/log/supervisor/%(program_name)s.log\nstderr_logfile=/var/log/supervisor/%(program_name)s.log\nautorestart=true supervisord is a service you can enable in /etc/rc.conf # /etc/rc.conf\nsupervisord_enable=\"YES\" and then start with service supervisord start \nwhen you get supervisord running, you can start a\nsupervisorctl shell, i.e. supervisorctl\nsupervisor> status\n# outputs\ngitit RUNNING pid 98057, uptime 0:32:27\nsupervisor> start/restart/stop gitit\nsupervisor> exit But there is one other little detail, in that when you try to\nrun gitit as a daemon like this, on FreeBSD it will fail because it can't\nfind git. But the symlink solution is easy enough. ln -s /usr/local/bin/git /usr/bin/ And you might as well stick a reverse proxy in front of it. Assuming\nyou configure gitit listen only on localhost:5001, install nginx. pkg install nginx enable nginx in /etc/rc.conf nginx_enable=\"YES\" Then, in the file /usr/local/etc/nginx/nginx.conf change the location \" / \"\nso that it looks like this. {\n.....\n location / {\n # root /usr/local/www/nginx;\n # index index.html index.htm;\n proxy_pass http://127.0.0.1:5001;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n....\n} and then start nginx service nginx start",
"title": "running gitit under the supervision of supervisord"
},
{
"location": "/arch_redis_nspawn/",
"text": "Quick Dirty Redis Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container,\ninstall redis, and start/enable redis.service.\nOnce you have the container running, it seems all you have to do to get\nthings working in a container subnet is to change the bind address.\n\n\n# /etc/redis.conf\n# bind 127.0.0.1\nbind 0.0.0.0\n\n\n\n\nyou can nmap port 6379, be sure to restart redis\n\n\nAgain I would refer you to the Arch Wiki",
"title": "Quick Dirty Redis Nspawn Container on Arch Linux"
},
{
"location": "/arch_redis_nspawn/#quick-dirty-redis-nspawn-container-on-arch-linux",
"text": "Refer to the Nspawn page for setting up the nspawn container,\ninstall redis, and start/enable redis.service.\nOnce you have the container running, it seems all you have to do to get\nthings working in a container subnet is to change the bind address. # /etc/redis.conf\n# bind 127.0.0.1\nbind 0.0.0.0 you can nmap port 6379, be sure to restart redis Again I would refer you to the Arch Wiki",
"title": "Quick Dirty Redis Nspawn Container on Arch Linux"
},
{
"location": "/arch_postgresql_nspawn/",
"text": "Quick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container.\n\nAnd then refer the \nArchWiki instructions\n\nfor postgresql. \n\n\nYou'll want to install postgresql, set a password for the default user \npostgres\n,\nand then login as postgres and initilize the database. \n\n\npacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the databse cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'\n\n\n\n\nYou'll need to configure \n/var/lib/postgres/data/pg_hba.conf\n and\n\n/var/lib/postgres/data/postgresql.conf\n for remote access,\npresumably with an identd daemon in mind. The ident daemon will\nlisten on port 113, not on the machine with the database server,\nbut it listens from the machine where is the client that remotely\nwants to access the database.",
"title": "Quick Dirty Postgresql Nspawn Container on Arch Linux"
},
{
"location": "/arch_postgresql_nspawn/#quick-dirty-postgresql-nspawn-container-on-arch-linux",
"text": "Refer to the Nspawn page for setting up the nspawn container. \nAnd then refer the ArchWiki instructions \nfor postgresql. You'll want to install postgresql, set a password for the default user postgres ,\nand then login as postgres and initilize the database. pacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the databse cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data' You'll need to configure /var/lib/postgres/data/pg_hba.conf and /var/lib/postgres/data/postgresql.conf for remote access,\npresumably with an identd daemon in mind. The ident daemon will\nlisten on port 113, not on the machine with the database server,\nbut it listens from the machine where is the client that remotely\nwants to access the database.",
"title": "Quick Dirty Postgresql Nspawn Container on Arch Linux"
},
{
"location": "/self_signed_certs/",
"text": "Setting up Self-Signed Certs\n\n\nThis \njamielinux\n\nblog post looks promising.",
"title": "Self Signed Certs"
},
{
"location": "/self_signed_certs/#setting-up-self-signed-certs",
"text": "This jamielinux \nblog post looks promising.",
"title": "Setting up Self-Signed Certs"
}
]
}