{ "docs": [ { "location": "/", "text": "Welcome to Trent Docs\n\n\nGit Repo For These Docs\n\n\nObviously, the commit history will reflect the time when these documents are written.\n\n\n\n\nServe And Share Apps From Your Phone With Fdroid\n\n\nNspawn Containers\n\n\nMastodon on Arch\n\n\nDebian Nspawn Container On Arch For Testing Apache Configurations\n\n\nDynamic Cacheing Nginx Reverse Proxy For Pacman\n\n\nFreeBSD Jails on FreeNAS\n \n\n\nQuick Dirty Redis Nspawn Container on Arch Linux\n\n\nQuick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nSelf Signed Certs", "title": "Home" }, { "location": "/#welcome-to-trent-docs", "text": "", "title": "Welcome to Trent Docs" }, { "location": "/#git-repo-for-these-docs", "text": "Obviously, the commit history will reflect the time when these documents are written. Serve And Share Apps From Your Phone With Fdroid Nspawn Containers Mastodon on Arch Debian Nspawn Container On Arch For Testing Apache Configurations Dynamic Cacheing Nginx Reverse Proxy For Pacman FreeBSD Jails on FreeNAS Quick Dirty Redis Nspawn Container on Arch Linux Quick Dirty Postgresql Nspawn Container on Arch Linux Self Signed Certs", "title": "Git Repo For These Docs" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/", "text": "Serve And Share Apps From Your Phone With Fdroid\n\n\nThis can speed up the process of updating apps on your devices, especially if fdroid is slow. \n\n\nStep 3: you are born on third base, find the menu item for \nSwap apps\n on phone one\n\n\nOpen fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select \nSwap apps\n.\n\n\n\n\nStep 4: enable the repo server on phone one\n\n\nOn the next screen toggle on \nVisible via Wi-Fi\n\n\n\n\nStep 5: a small step for your android\n\n\nAt the bottom of the screen select \nSCAN QR CODE\n\n\n\n\nStep 6: choose which apps to serve from phone one\n\n\nAt the next screen \nChoose Apps\n you want to xerve I mean serve and then touch the -> right arrow to proceed\n\n\n\n\nStep 7: another small step for your android\n\n\nTouch the -> right arrow again, do it.\n\n\n\n\nOcho: <- this means step eight\n\n\nTouch the -> right arrow until you are coming here\n\n\n\nNotice you can use either a qr code or a local url, so grab one of your other phones.\n\n\nPrivacy Friendly Qr Scanner\n appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.\n\n\nStep 9: find the menu item for \nRepositories\n on phone two\n\n\nOn your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose \nRepositories\n\n\n\n\nStep 10: (temporarily) toggle off the remote repos on phone two\n\n\nToggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner\n\n\n\n\nStep 11 A: key in the local repo url by hand on phone two\n\n\nAfter touching the + plus button in \nStep Ten\n on phone two, you can fill in the url address that corresponds to the photo in \nOcho\n\n\n\n\nStep 12 A: or scan in the local repo url with qr code on phone two\n\n\nIf you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo \nOcho\n. 
The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in \nStep Eleven A\n\n\nStep 13: profit from moar faster local downloads\n\n\nOn phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet.\n\n\n\n\nStep 14: how to remember all this?\n\n\nYou can bookmark.\n\n\nIn fact, you can add a shortcut icon directly to \n\nthis page\n,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof Firefox for Android.\n\n\nOr you can clone \nthe git repo\n\nwhich this site automatically builds itself from.", "title": "Serve And Share Apps From Your Phone With Fdroid" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#serve-and-share-apps-from-your-phone-with-fdroid", "text": "This can speed up the process of updating apps on your devices, especially if fdroid is slow.", "title": "Serve And Share Apps From Your Phone With Fdroid" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-3-you-are-born-on-third-base-find-the-menu-item-for-swap-apps-on-phone-one", "text": "", "title": "Step 3: you are born on third base, find the menu item for Swap apps on phone one" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#open-fdroid-and-navigate-to-the-menu-by-touching-three-dots-in-upper-right-hand-corner-of-the-screen-select-swap-apps", "text": "", "title": "Open fdroid, and navigate to the menu by touching three dots in upper right hand corner of the screen. Select Swap apps." }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-4-enable-the-repo-server-on-phone-one", "text": "", "title": "Step 4: enable the repo server on phone one" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-the-next-screen-toggle-on-visible-via-wi-fi", "text": "", "title": "On the next screen toggle on Visible via Wi-Fi" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-5-a-small-step-for-your-android", "text": "", "title": "Step 5: a small step for your android" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-bottom-of-the-screen-select-scan-qr-code", "text": "", "title": "At the bottom of the screen select SCAN QR CODE" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-6-choose-which-apps-to-serve-from-phone-one", "text": "", "title": "Step 6: choose which apps to serve from phone one" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#at-the-next-screen-choose-apps-you-want-to-xerve-i-mean-serve-and-then-touch-the-right-arrow-to-proceed", "text": "", "title": "At the next screen Choose Apps you want to xerve I mean serve and then touch the -> right arrow to proceed" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-7-another-small-step-for-your-android", "text": "", "title": "Step 7: another small step for your android" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-again-do-it", "text": "", "title": "Touch the -> right arrow again, do it." 
}, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#ocho-this-means-step-eight", "text": "", "title": "Ocho: <- this means step eight" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#touch-the-right-arrow-until-you-are-coming-here", "text": "Notice you can use either a qr code or a local url, so grab one of your other phones. Privacy Friendly Qr Scanner appears to be a good Qr scanner,\nbut of course you can key in the url by hand too.", "title": "Touch the -> right arrow until you are coming here" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-9-find-the-menu-item-for-repositories-on-phone-two", "text": "", "title": "Step 9: find the menu item for Repositories on phone two" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-your-other-phone-open-fdroid-navigate-to-menu-by-selecting-the-3-dots-in-the-upper-right-hand-corner-and-choose-repositories", "text": "", "title": "On your other phone open fdroid, navigate to menu by selecting the 3 dots in the upper right hand corner and choose Repositories" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-10-temporarily-toggle-off-the-remote-repos-on-phone-two", "text": "", "title": "Step 10: (temporarily) toggle off the remote repos on phone two" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#toggle-all-the-current-repos-off-and-then-if-you-want-to-key-in-the-new-local-repo-url-by-hand-touch-the-plus-in-the-upper-right-hand-corner", "text": "", "title": "Toggle all the current repos off and then if you want to key in the new local repo url by hand touch the + plus in the upper right hand corner" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-11-a-key-in-the-local-repo-url-by-hand-on-phone-two", "text": "", "title": "Step 11 A: key in the local repo url by hand on phone two" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#after-touching-the-plus-button-in-step-ten-on-phone-two-you-can-fill-in-the-url-address-that-corresponds-to-the-photo-in-ocho", "text": "", "title": "After touching the + plus button in Step Ten on phone two, you can fill in the url address that corresponds to the photo in Ocho" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-12-a-or-scan-in-the-local-repo-url-with-qr-code-on-phone-two", "text": "If you prefer not to key in the url by hand, on phone two touch the\nhome button and then open your qr-scanning application and scan the\nqr code on phone one, as seen in photo Ocho . The qr-scanning\napp will direct you to open fdroid, and your result will be the same as\nthe photo in Step Eleven A", "title": "Step 12 A: or scan in the local repo url with qr code on phone two" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-13-profit-from-moar-faster-local-downloads", "text": "", "title": "Step 13: profit from moar faster local downloads" }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#on-phone-two-you-can-now-download-and-install-apps-and-updates-from-phone-one-and-the-download-speed-will-be-much-faster-than-from-the-internet", "text": "", "title": "On phone two you can now download and install apps and updates from phone one, and the download speed will be much faster than from the internet." }, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#step-14-how-to-remember-all-this", "text": "", "title": "Step 14: how to remember all this?" 
}, { "location": "/serve_and_share_apps_from_your_phone_with_fdroid/#you-can-bookmark", "text": "In fact, you can add a shortcut icon directly to this page ,\non your home screen,\nas seen here with IceCat, a debranded build of the latest extended-support-release\nof FireFox for Android. \nOr you can clone the git repo \nwhich this site automatically builds itself from.", "title": "You can bookmark." }, { "location": "/nspawn/", "text": "Nspawn Containers\n\n\nThis Link For Arch Linux Wiki for Nspawn Containers\n\n\nI like the idea of starting with the easy containers first.\n\n\nCreate a FileSystem\n\n\ncd /var/lib/machines\n# create a directory\nmkdir \n# use pacstrap to create a file system\npacstrap -i -c -d base --ignore linux\n\n\n\n\nAt this point you might want to copy over some configs to save time later.\n\n\n\n\n/etc/locale.conf\n\n\n/root/.bashrc\n\n\n/etc/locale.gen\n\n\n\n\nFirst boot and create root password\n\n\nsystemd-nspawn -b -D \npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone \n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add *-n* flag\nsystemd-nspawn -b -D -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D -n --bind=/var/cache/pacman/pkg\n\n\n\n\nNetworking\n\n\nHere's a link that skips ahead to \nAutomatically Starting the Container\n\n\nOn Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work.\n\nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done.\n\n\n# /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge\n\n\n\n\n# /etc/systemd/network/50-br0.netdev\n[Match]\nName=br0\n\n[Network]\nAddress=10.0.55.1/24 # arbitrarily pick a subnet range to taste\nDHCPServer=yes\nIPMasquerade=yes\n\n\n\n\nNotice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the \nsystemd-nspawn\n\ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other.\n\n\n# first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D --network-bridge=br0 --bind=/var/cache/pacman/pkg\n\n\n\n\nAutomatically Starting the Container\n\n\nHere's a link back up to \nNetworking\n\nin case you previously skipped ahead.\n\n\nThere are two ways to automate starting the container. You can override\n\nsystemd-nspawn@.service\n or create an \nnspawn\n file. 
\n\n\nFirst enable machines.target\n\n\n# to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@.service\n\n\n\n\nEdit \n/etc/systemd/system/systemd-nspawn@.service\n to add the \nsystemd-nspawn\n options\nyou want to the \nExecStart\n command.\n\n\nOr create \n/etc/systemd/nspawn/<container_name>.nspawn\n\n\n# /etc/systemd/nspawn/<container_name>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0\n\n\n\n\n# /etc/systemd/nspawn/<container_name>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\n# this seems to be the default sometimes, though\nVirtualEthernet=1\n\n\n\n\n# in either case\nsystemctl start/enable systemd-nspawn@<container_name>\n# to get a shell\nmachinectl shell <container_name>\n# and then to get an environment\nbash\n\n\n\n\nThis would be a good time to check for network and name resolution,\nand symlink resolv.conf if need be.\n\n\nInitial Configuration Inside The Container\n\n\n# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, network time\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com\n\n\n\n\nIf you want to change the locale\n\n\nFinal Observations\n\n\n\n\nYou can start/stop nspawn containers with the \nmachinectl\n command. \n\n\nYou can start nspawn containers with the \nsystemd-nspawn\n command.\n\n\nYou can configure the systemd service for a container with a \nsystemd-nspawn@.service\n file override\n\n\nOr you can configure an nspawn container with a \n.nspawn\n file\n\n\n\n\nBut in regard to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts.\n\n\nAnother curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.", "title": "Nspawn" }, { "location": "/nspawn/#nspawn-containers", "text": "This Link For Arch Linux Wiki for Nspawn Containers I like the idea of starting with the easy containers first.", "title": "Nspawn Containers" }, { "location": "/nspawn/#create-a-filesystem", "text": "cd /var/lib/machines\n# create a directory\nmkdir <container_name>\n# use pacstrap to create a file system\npacstrap -i -c -d <container_name> base --ignore linux At this point you might want to copy over some configs to save time later. 
/etc/locale.conf /root/.bashrc /etc/locale.gen", "title": "Create a FileSystem" }, { "location": "/nspawn/#first-boot-and-create-root-password", "text": "systemd-nspawn -b -D <container_name>\npasswd\n# assuming you copied over /etc/locale.gen\nlocale-gen\n# set timezone\ntimedatectl set-timezone <timezone>\n# enable network time\ntimedatectl set-ntp 1\n# enable networking\nsystemctl enable systemd-networkd\nsystemctl enable systemd-resolved\npoweroff\n# if you want to nat the container add the *-n* flag\nsystemd-nspawn -b -D <container_name> -n\n# and to bind mount the package cache\nsystemd-nspawn -b -D <container_name> -n --bind=/var/cache/pacman/pkg", "title": "First boot and create root password" }, { "location": "/nspawn/#networking", "text": "Here's a link that skips ahead to Automatically Starting the Container On Arch, assuming you have systemd-networkd and systemd-resolved\nset up correctly, networking from the host end of things should\njust work. \nHowever on Linode it does not. What does work on Linode is to create\na bridge interface. Two files for br0 will get the job done. # /etc/systemd/network/50-br0.netdev\n[NetDev]\nName=br0\nKind=bridge # /etc/systemd/network/50-br0.network\n[Match]\nName=br0\n\n[Network]\n# arbitrarily pick a subnet range to taste\nAddress=10.0.55.1/24\nDHCPServer=yes\nIPMasquerade=yes Notice how the configuration file tells systemd-networkd to offer\nDHCP service and to perform masquerade. You can modify the systemd-nspawn \ncommand to use the bridge interface. Every container attached to this bridge\nwill be on the same subnet and able to talk to each other. # first restart systemd-networkd to bring up the new bridge interface\nsystemctl restart systemd-networkd\n# and add --network-bridge=br0 to systemd-nspawn command\nsystemd-nspawn -b -D <container_name> --network-bridge=br0 --bind=/var/cache/pacman/pkg", "title": "Networking" }, { "location": "/nspawn/#automatically-starting-the-container", "text": "Here's a link back up to Networking \nin case you previously skipped ahead. There are two ways to automate starting the container. You can override systemd-nspawn@.service or create an nspawn file. First enable machines.target # to override the systemd-nspawn@.service file\ncp /lib/systemd/system/systemd-nspawn@.service /etc/systemd/system/systemd-nspawn@.service Edit /etc/systemd/system/systemd-nspawn@.service to add the systemd-nspawn options\nyou want to the ExecStart command. 
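For example, the edited \nExecStart\n line might end up looking something like this. This is only a sketch based on the stock unit file; the default flags in your systemd version may differ, and br0 plus the package-cache bind mount are just the examples from above.\n\n\n# /etc/systemd/system/systemd-nspawn@.service (excerpt, a sketch)\n[Service]\nExecStart=systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-bridge=br0 --bind=/var/cache/pacman/pkg --settings=override --machine=%i\n\n\n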
Or create /etc/systemd/nspawn/<container_name>.nspawn # /etc/systemd/nspawn/<container_name>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\nBridge=br0 # /etc/systemd/nspawn/<container_name>.nspawn\n[Files]\nBind=/var/cache/pacman/pkg\n\n[Network]\n# this seems to be the default sometimes, though\nVirtualEthernet=1 # in either case\nsystemctl start/enable systemd-nspawn@<container_name>\n# to get a shell\nmachinectl shell <container_name>\n# and then to get an environment\nbash This would be a good time to check for network and name resolution,\nand symlink resolv.conf if need be.", "title": "Automatically Starting the Container" }, { "location": "/nspawn/#initial-configuration-inside-the-container", "text": "# set time zone if you don't want UTC\ntimedatectl set-timezone <timezone>\n# enable ntp, network time\ntimedatectl set-ntp 1\n# enable networking from inside the container\nsystemctl enable systemd-networkd\nsystemctl start systemd-networkd\nsystemctl enable systemd-resolved\nsystemctl start systemd-resolved\nrm /etc/resolv.conf \nln -s /run/systemd/resolve/resolv.conf /etc/\n# ping google\nping -c 3 google.com If you want to change the locale", "title": "Initial Configuration Inside The Container" }, { "location": "/nspawn/#final-observations", "text": "You can start/stop nspawn containers with the machinectl command. You can start nspawn containers with the systemd-nspawn command. You can configure the systemd service for a container with a systemd-nspawn@.service file override Or you can configure an nspawn container with a .nspawn file But in regard to the above list\nI have noticed differences in behaviour,\nin some scenarios, concerning file attributes\nfor bind mounts. Another curiosity: when you have nspawn containers natted on VirtualEthernet connections,\nthey might be able to ping each other at 10.x.y.z, but not resolve each other. But they might\nbe able to resolve each other if they are all connected to the same bridge interface or nspawn\nnetwork zone, but will randomly resolve each other in any of the 10.x.y.z, 169.x.y.z,\nor fe80::....:....:....%host (ipv6 local) spaces, which would complicate configuring the containers\nto talk to each other. But I intend to look into this some more.", "title": "Final Observations" }, { "location": "/mastodon_on_arch/", "text": "Some Observations About Installing Mastodon on Arch.\n\n\nNginx\n\n\nFrom the \nProduction Guide\n\nyou can copy the example nginx.conf file to \n/etc/nginx/sites-enabled/some_arbitrary.conf\n,\nand then add the following to \n/etc/nginx/nginx.conf\n in the http section;\nthis assumes a fresh install of nginx with the default configuration file.\n\n\n# /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}\n\n\n\n\nInstalling the Dependencies\n\n\npacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd\n\n\n\n\n# I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c\n\n\n\n\n\n\nI'm assuming base-devel is installed\n\n\npython2 seems to be required to run the \nyarn install\n command later on\n\n\noidentd seems to be a usable replacement for pident\n\n\nlibpqxx pulls in postgresql-libs\n\n\nfile is already installed\n\n\ncurl is already installed\n\n\nruby-build and rbenv are installable from the AUR\n\n\nalso postgresql and redis, unless those are in another container or whatever.\n\n\n\n\nOther Observations\n\n\nI discovered that between \ngem install bundler\n and\n\n\nbundle install --deployment --without development test\n,\nyou have to update your environment, with \n\neval \"$(rbenv init -)\"\n, i.e.\n\n\necho 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. 
~/.bashrc\n\n\n\n\nYou have to update your environment more than once during the\ninstallation.\n\n\nPresumably you don't ever want to delete the \n~/live/Public/\n directory\nif that is where assets are being stored, but it seems ok to delete \n\n~/live/node_modules\n and then rerun the \nyarn install\n command.\n\n\nIn \n~/live/.env.production\n, \nSINGLE_USER_MODE\n has to be set\nto \nfalse\n until at least one user is created, or the web service won't \neven start. (Also \nchmod 755 ~/\n)", "title": "Mastodon on Arch" }, { "location": "/mastodon_on_arch/#some-observations-about-installing-mastodon-on-arch", "text": "", "title": "Some Observations About Installing Mastodon on Arch." }, { "location": "/mastodon_on_arch/#nginx", "text": "From the Production Guide \nyou can copy the example nginx.conf file to /etc/nginx/sites-enabled/some_arbitrary.conf ,\nand then add the following to /etc/nginx/nginx.conf in the http section;\nthis assumes a fresh install of nginx with the default configuration file. # /etc/nginx/nginx.conf \nhttp {\n include sites-enabled/*;\n}", "title": "Nginx" }, { "location": "/mastodon_on_arch/#installing-the-dependancies", "text": "pacman -S certbot nginx libxml2 imagemagick ffmpeg git yarn npm python2 oidentd # I'm guessing here\npacman -S libpqxx libxslt protobuf protobuf-c I'm assuming base-devel is installed python2 seems to be required to run the yarn install command later on oidentd seems to be a usable replacement for pident libpqxx pulls in postgresql-libs file is already installed curl is already installed ruby-build and rbenv are installable from the AUR also postgresql and redis, unless those are in another container or whatever.", "title": "Installing the Dependencies" }, { "location": "/mastodon_on_arch/#other-observations", "text": "I discovered that between gem install bundler and bundle install --deployment --without development test ,\nyou have to update your environment, with eval \"$(rbenv init -)\" , i.e. echo 'eval \"$(rbenv init -)\"' >> .bashrc\n# and then\n. ~/.bashrc You have to update your environment more than once during the\ninstallation. Presumably you don't ever want to delete the ~/live/Public/ directory\nif that is where assets are being stored, but it seems ok to delete ~/live/node_modules and then rerun the yarn install command. In ~/live/.env.production , SINGLE_USER_MODE has to be set\nto false until at least one user is created, or the web service won't \neven start. (Also chmod 755 ~/ )", "title": "Other Observations" }, { "location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/", "text": "Debian Nspawn Container On Arch For Testing Apache Configurations\n\n\nBegin by exporting the environment variable for your squid caching \nproxy. 
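Something like the following, where the host and port are hypothetical stand-ins for your own squid instance; debootstrap fetches packages with wget, which honors this variable.\n\n\nexport http_proxy=\"http://squid.example.lan:3128\"\n\n\n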
If you're debootstrapping Debian file systems, the best way to\nspeed this up is with squid.\n\n\nThe ArchWiki page for nspawn containers has a\n\nDebian/Ubuntu subsection\n\nObviously you're going to want to install debootstrap and debian-archive-keyring.\n\n\n# to create a Stretch Container\ncd /var/lib/machines \nmkdir <container_name>\ndebootstrap stretch <container_name>\n\n\n\n\nAfter some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container.\n\n\ncp apt.conf /var/lib/machines/<container_name>/etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container_name>/etc/hostname\n\n\n\n\nAnd then start the container, and set the root password.\n\n\n# boot in interactive mode\nsystemd-nspawn -D <container_name>\n# set the passwd and logout\npasswd \nlogout \n\n\n\n\nNow we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that your bind mounts have the correct permissions \nfrom inside the container.\n\n\n# for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container_name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container_name> --network-bridge=br0 --bind=/var/cache/apt/archives\n\n\n\n\nAlternatively, if you use an nspawn file, then you can use a command \nsimilar to the following to start it. But you'll first need to \nboot the container from the command line and install dbus,\nbecause \nmachinectl shell\n and \nmachinectl login\n won't work \nwithout dbus. In this case use the following sequence of commands.\n\n\n# start the container and login as root\nsystemd-nspawn -b -D <container_name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff\n\n\n\n\nAfter this you can start the container with systemd, when \nusing an nspawn file.\n\n\nsystemctl start systemd-nspawn@<container_name>\n\n\n\n\n# /etc/systemd/nspawn/<container_name>.nspawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nBridge=br0 \n\n\n\n\nYou can use tasksel to install a web-server.\n\n\n# apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2\n\n\n\n\nYou'll be up and running with the default self-signed certs.", "title": "Debian Nspawn Container On Arch For Testing Apache Configurations" }, { "location": "/debian_nspawn_container_on_arch_for_testing_apache_configurations/#debian-nspawn-container-on-arch-for-testing-apache-configurations", "text": "Begin by exporting the environment variable for your squid caching \nproxy. If you're debootstrapping Debian file systems, the best way to\nspeed this up is with squid. The ArchWiki page for nspawn containers has a Debian/Ubuntu subsection \nObviously you're going to want to install debootstrap and debian-archive-keyring. # to create a Stretch Container\ncd /var/lib/machines \nmkdir <container_name>\ndebootstrap stretch <container_name> After some experimentation, perhaps this is the best time to write\nthe intended hostname into the container, and write any\napt-cacher or apt-cacher-ng proxies into /etc/apt/apt.conf \non the container. cp apt.conf /var/lib/machines/<container_name>/etc/apt/apt.conf \necho \"<hostname>\" > /var/lib/machines/<container_name>/etc/hostname And then start the container, and set the root password. 
# boot in interactive mode\nsystemd-nspawn -D <container_name>\n# set the passwd and logout\npasswd \nlogout Now we can boot the container in non-interactive mode, either\nfrom the command line or using nspawn files. In either case \ndouble check that your bind mounts have the correct permissions \nfrom inside the container. # for instance attached to a bridge interface br0 \nsystemd-nspawn -b -D <container_name> --network-bridge=br0\n# or if you've set up a package cache \nsystemd-nspawn -b -D <container_name> --network-bridge=br0 --bind=/var/cache/apt/archives Alternatively, if you use an nspawn file, then you can use a command \nsimilar to the following to start it. But you'll first need to \nboot the container from the command line and install dbus,\nbecause machinectl shell and machinectl login won't work \nwithout dbus. In this case use the following sequence of commands. # start the container and login as root\nsystemd-nspawn -b -D <container_name> --network-bridge=br0 \n# bring up networking so you can install dbus\nsystemctl enable/start systemd-networkd\n# this is also a good time to install and configure locale\napt install dbus locales \n# to configure locale \ndpkg-reconfigure locales \npoweroff After this you can start the container with systemd, when \nusing an nspawn file. systemctl start systemd-nspawn@<container_name> # /etc/systemd/nspawn/<container_name>.nspawn \n[Files] \n# Bind=/var/cache/apt/archives \n\n[Network] \nBridge=br0 You can use tasksel to install a web-server. # apache2 will immediately be listening on port 80\ntasksel install web-server\n# enable mod ssl\na2enmod ssl ; systemctl restart apache2\n# enable the default ssl test page \na2ensite default-ssl.conf ; systemctl reload apache2 You'll be up and running with the default self-signed certs.", "title": "Debian Nspawn Container On Arch For Testing Apache Configurations" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/", "text": "Dynamic Caching Nginx Reverse Proxy For Pacman\n\n\nYou set up a dynamic caching reverse proxy and then you put the ip address or hostname for that server in \n/etc/pacman.d/mirrorlist\n on your client machines.\n\n\nOf course if you want to you can set this up and run it in an\n\nNspawn Container\n.\nThe \nArchWiki Page for pacman tips\n\nmostly spells out what to do, but I want to document\nthe exact steps I would take.\n\n\nAs for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with its own ip address. 
But it only does one\nthing, and that's all you have to configure it for.\n\n\nI see no reason to use nginx-mainline instead of stable.\n\n\npacman -S nginx\n\n\n\n\nThe suggested configuration in the Arch Wiki\nis to create a directory \n/srv/http/pacman-cache\n,\nand that seems to work well enough.\n\n\nmkdir /srv/http/pacman-cache\n# and then change its ownership\nchown http:http /srv/http/pacman-cache\n\n\n\n\nnginx configuration\n\n\nand then it references an nginx.conf in\n\nthis gist\n,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx.\n\n\nYou can start with a default \n/etc/nginx/nginx.conf\n,\nand add the line \ninclude sites-enabled/*;\n\nat the end of the \nhttp\n section.\n\n\n# /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n}\n\n\n\n\nAnd then create the directory \n/etc/nginx/sites-enabled\n\n\nmkdir /etc/nginx/sites-enabled\n\n\n\n\nAnd then create \n/etc/nginx/sites-enabled/proxy_cache.conf\n,\nwhich is \nmostly\n a\n\ncopy-and-paste from this gist\n.\n\n\nNotice the \nserver_name\n. This has to match the entry in\n\n/etc/pacman.d/mirrorlist\n on the client machines you are\nupdating from. If you can use the hostname, great. 
But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist.\n\n\nAnd of course your mirrorlist entry\non the client machine, has to preserve the directory scheme.\n\n\n# /etc/pacman.d/mirrorlist\nServer = http://<server>:<port>/archlinux/$repo/os/$arch\n\n\n\n\n# /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <server>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}\n\n\n\n\nsystemd service that cleans the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_clean.timer\n\n\n\n\nKeeps the 2 most recent versions of each package using the paccache command.\n\n\n# /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that cleans the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target\n\n\n\n\nsystemd service 
that deletes the pacman database files from the proxy cache\n\n\ndon't enable the service, enable the timer\n\n\nsystemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer\n\n\n\n\nYou won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, so I'll keep an\neye on things.\n\n\n# /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $(find /srv -name '*.db') ; do rm $f; done\"\nStandardOutput=syslog\nStandardError=syslog\n\n\n\n\nsystemd timer for the systemd service that deletes the pacman database files from the proxy cache\n\n\n# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target", "title": "Dynamic Caching Nginx Reverse Proxy For Pacman" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dynamic-cacheing-nginx-reverse-proxy-for-pacman", "text": "", "title": "Dynamic Caching Nginx Reverse Proxy For Pacman" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#you-set-up-a-dynamic-cacheing-reverse-proxy-and-then-you-put-the-ip-address-or-hostname-for-that-server-in-etcpacmandmirrorlist-on-your-client-machines", "text": "Of course if you want to you can set this up and run it in an Nspawn Container .\nThe ArchWiki Page for pacman tips \nmostly spells out what to do, but I want to document\nthe exact steps I would take. As for how you would run this on a server with other virtual hosts?\nWho cares? That is what is so brilliant about using an\nnspawn container, in that it behaves like just another\ncomputer on the lan with its own ip address. But it only does one\nthing, and that's all you have to configure it for. I see no reason to use nginx-mainline instead of stable. pacman -S nginx The suggested configuration in the Arch Wiki\nis to create a directory /srv/http/pacman-cache ,\nand that seems to work well enough. mkdir /srv/http/pacman-cache\n# and then change its ownership\nchown http:http /srv/http/pacman-cache", "title": "You set up a dynamic caching reverse proxy and then you put the ip address or hostname for that server in /etc/pacman.d/mirrorlist on your client machines." }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#nginx-configuration", "text": "and then it references an nginx.conf in this gist ,\nbut that is not a complete nginx.conf and so here is a method to get that\nworking as of July 2017 with a fresh install of nginx. You can start with a default /etc/nginx/nginx.conf ,\nand add the line include sites-enabled/*; \nat the end of the http section. 
# /etc/nginx/nginx.conf\n#user html;\nworker_processes 1;\n\n#error_log logs/error.log;\n#error_log logs/error.log notice;\n#error_log logs/error.log info;\n\n#pid logs/nginx.pid;\n\n\nevents {\n worker_connections 1024;\n}\n\n\nhttp {\n include mime.types;\n default_type application/octet-stream;\n\n #log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n # '$status $body_bytes_sent \"$http_referer\" '\n # '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n #access_log logs/access.log main;\n\n sendfile on;\n #tcp_nopush on;\n\n #keepalive_timeout 0;\n keepalive_timeout 65;\n\n #gzip on;\n\n server {\n listen 80;\n server_name localhost;\n\n #charset koi8-r;\n\n #access_log logs/host.access.log main;\n\n location / {\n root /usr/share/nginx/html;\n index index.html index.htm;\n }\n\n #error_page 404 /404.html;\n\n # redirect server error pages to the static page /50x.html\n #\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n\n # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n #\n #location ~ \\.php$ {\n # proxy_pass http://127.0.0.1;\n #}\n\n # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n #\n #location ~ \\.php$ {\n # root html;\n # fastcgi_pass 127.0.0.1:9000;\n # fastcgi_index index.php;\n # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;\n # include fastcgi_params;\n #}\n\n # deny access to .htaccess files, if Apache's document root\n # concurs with nginx's one\n #\n #location ~ /\\.ht {\n # deny all;\n #}\n }\n\n\n # another virtual host using mix of IP-, name-, and port-based configuration\n #\n #server {\n # listen 8000;\n # listen somename:8080;\n # server_name somename alias another.alias;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n\n\n # HTTPS server\n #\n #server {\n # listen 443 ssl;\n # server_name localhost;\n\n # ssl_certificate cert.pem;\n # ssl_certificate_key cert.key;\n\n # ssl_session_cache shared:SSL:1m;\n # ssl_session_timeout 5m;\n\n # ssl_ciphers HIGH:!aNULL:!MD5;\n # ssl_prefer_server_ciphers on;\n\n # location / {\n # root html;\n # index index.html index.htm;\n # }\n #}\n include sites-enabled/*;\n\n} And then create the directory /etc/nginx/sites-enabled mkdir /etc/nginx/sites-enabled And then create /etc/nginx/sites-enabled/proxy_cache.conf ,\nwhich is mostly a copy-and-paste from this gist . Notice the server_name . This has to match the entry in /etc/pacman.d/mirrorlist on the client machines you are\nupdating from. If you can use the hostname, great. But if you\nhave to assign static ip addresses and explicitly write the local\nip address instead, then that should match what you write in your mirrorlist. And of course your mirrorlist entry\non the client machine, has to preserve the directory scheme. 
# /etc/pacman.d/mirrorlist\nServer = http://<server>:<port>/archlinux/$repo/os/$arch # /etc/nginx/sites-enabled/proxy_cache.conf\n# nginx may need to resolve domain names at run time\nresolver 8.8.8.8 8.8.4.4;\n\n# Pacman Cache\nserver\n{\nlisten 80;\nserver_name <server>; # has to match the entry in mirrorlist on client machine.\nroot /srv/http/pacman-cache;\nautoindex on;\n\n # Requests for package db and signature files should redirect upstream without caching\n # Well that's the default anyway.\n # But what if you're spinning up a lot of nspawn containers, don't want to waste all that bandwidth?\n # I choose to instead run a systemd timer that deletes the *db files once every 15 minutes\n location ~ \\.(db|sig)$ {\n try_files $uri @pkg_mirror;\n # proxy_pass http://mirrors$request_uri;\n }\n\n # Requests for actual packages should be served directly from cache if available.\n # If not available, retrieve and save the package from an upstream mirror.\n location ~ \\.tar\\.xz$ {\n try_files $uri @pkg_mirror;\n }\n\n # Retrieve package from upstream mirrors and cache for future requests\n location @pkg_mirror {\n proxy_store on;\n proxy_redirect off;\n proxy_store_access user:rw group:rw all:r;\n proxy_next_upstream error timeout http_404;\n proxy_pass http://mirrors$request_uri;\n }\n}\n\n# Upstream Arch Linux Mirrors\n# - Configure as many backend mirrors as you want in the blocks below\n# - Servers are used in a round-robin fashion by nginx\n# - Add \"backup\" if you want to only use the mirror upon failure of the other mirrors\n# - Separate \"server\" configurations are required for each upstream mirror so we can set the \"Host\" header appropriately\nupstream mirrors {\nserver localhost:8001;\nserver localhost:8002; # backup\nserver localhost:8003; # backup\n}\n\n# Arch Mirror 1 Proxy Configuration\nserver\n{\nlisten 8001;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.kernel.org$request_uri;\n proxy_set_header Host mirrors.kernel.org;\n }\n}\n\n# Arch Mirror 2 Proxy Configuration\nserver\n{\nlisten 8002;\nserver_name localhost;\n\n location / {\n proxy_pass http://mirrors.ocf.berkeley.edu$request_uri;\n proxy_set_header Host mirrors.ocf.berkeley.edu;\n }\n}\n\n# Arch Mirror 3 Proxy Configuration\nserver\n{\n listen 8003;\n server_name localhost;\n\n location / {\n proxy_pass http://mirrors.cat.pdx.edu$request_uri;\n proxy_set_header Host mirrors.cat.pdx.edu;\n }\n}", "title": "nginx configuration" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-cleans-the-proxy-cache", "text": "", "title": "systemd service that cleans the proxy cache" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer", "text": "systemctl enable/start /etc/systemd/system/proxy_cache_clean.timer Keeps the 2 most recent versions of each package using the paccache command. 
# /etc/systemd/system/proxy_cache_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/find /srv/http/pacman-cache/ -type d -exec /usr/bin/paccache -v -r -k 2 -c {} \\;\nStandardOutput=syslog\nStandardError=syslog", "title": "don't enable the service, enable the timer" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-cleans-the-proxy-cache", "text": "# /etc/systemd/system/proxy_cache_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache\n\n[Timer]\nOnBootSec=20min\nOnUnitActiveSec=100h\nUnit=proxy_cache_clean.service\n\n[Install]\nWantedBy=timers.target", "title": "systemd timer for the systemd service that cleans the proxy cache" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache", "text": "", "title": "systemd service that deletes the pacman database files from the proxy cache" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#dont-enable-the-service-enable-the-timer_1", "text": "systemctl enable/start /etc/systemd/system/proxy_cache_database_clean.timer You won't need this if you don't cache the database files. But if you do cache\nthe database files, then you'll just be stuck with old database files, unless\nyou periodically delete them. But I'm not sure about all this, so I'll keep an\neye on things. # /etc/systemd/system/proxy_cache_database_clean.service\n[Unit]\nDescription=Clean the pacman proxy cache database\n\n[Service]\nType=oneshot\nExecStart=/bin/bash -c \"for f in $(find /srv -name '*.db') ; do rm $f; done\"\nStandardOutput=syslog\nStandardError=syslog", "title": "don't enable the service, enable the timer" }, { "location": "/dynamic_cacheing_nginx_reverse_proxy_for_pacman/#systemd-timer-for-the-systemd-service-that-deletes-the-pacman-database-files-from-the-proxy-cache", "text": "# /etc/systemd/system/proxy_cache_database_clean.timer\n[Unit]\nDescription=Timer for cleaning the pacman proxy cache database\n\n[Timer]\nOnBootSec=10min\nOnUnitActiveSec=15min\nUnit=proxy_cache_database_clean.service\n\n[Install]\nWantedBy=timers.target", "title": "systemd timer for the systemd service that deletes the pacman database files from the proxy cache" }, { "location": "/freebsd_jails_on_freenas/", "text": "FreeBSD Jails on FreeNAS\n\n\nMostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.\n\n\nIn The FreeNAS WebGui, Create A New Jail\n\n\nThe default networking configuration will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to \nJails -> Storage -> Add Storage\n and\nadd the \npkg\n storage directory to \n/var/cache/pkg\n\ninside the jail. 
Although\nthe restarted jail will have a new jail number as reported by\nthe \njls\n command.\n\n\nlocale\n\n\nWhen you use \njexec\n to get a shell, you get an environment\nwith an utf_8 locale. Not so if you ssh into the new jail.\nFor this put the following contents into ~/.login_conf\n\n\n# ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:\n\n\n\n\nssh\n\n\nTo get ssh running, edit \n/etc/rc.conf\n inside the jail.\n\n\n# /etc/rc.conf\nsshd_enable=\"YES\"\n\n\n\n\nTo start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command.\n\n\nservice sshd start\n\n\n\n\nByobu\n\n\nYou'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend.\n\n\npkg install byobu tmux newt\n\n\n\n\nIf you execute \nbyobu-config\n, by pressing \nf9\n, the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working so you have to enable them\none at a time to see what happens.\n\n\n\n\ndate\n\n\ndisk\n\n\ndistro\n\n\nhostname\n\n\nip address\n\n\nload_average\n\n\nlogo\n\n\ntime\n\n\nuptime\n\n\nusers\n\n\nwhoami\n\n\n\n\nvim\n\n\nVia pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependancies, but vim-lite is not build with python.\n\n\nFor instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite.\n\n\nTo get lightline working update $TERM\n\n\n# ~/.config/fish/config.fish\nexport TERM=xterm-256color\n\n\n\n\nAnd vimrc\n\n\n# ~/.vimrc\nset ls=2\n\n\n\n\nAnother option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependancies with the following in /etc/make.conf.\n\n\n# /etc/make.conf\nWITHOUT_X11=yes\n\n\n\n\nAnd then when you compile vim from ports, run \nmake config\n where you can enable\npython.\n\n\npython\n\n\nFor python3 virtualenv\n\n\nvirtualenv-3.6 ", "title": "FreeBSD Jails on FreeNAS" }, { "location": "/freebsd_jails_on_freenas/#freebsd-jails-on-freenas", "text": "Mostly a personal distillation for getting a FreeBSD\nJail up and running on FreeNAS.", "title": "FreeBSD Jails on FreeNAS" }, { "location": "/freebsd_jails_on_freenas/#in-the-freenas-webgui-create-a-new-jail", "text": "The default networking configuration, will give\nyour jail an ip address on the lan. For now, I've\ndecided to just share a pkg cache with each jail.\nNavigate to Jails -> Storage -> Add Storage and\nadd the pkg storage directory to /var/cache/pkg \ninside the jail. For instance, on my local FreeNAS server,\nthe pkg directory is at /mnt/VolumeOne/pkg/. If you ssh into the host server, you can type the command jls , to list the jails. Based on the output of the\ncommand jls , you can get a shell with jexec \nof jexec .", "title": "In The FreeNAS WebGui, Create A New Jail" }, { "location": "/freebsd_jails_on_freenas/#updating", "text": "How about the command pkg audit -F ? Downloads a\nlist of known security issues and checks your system\nagainst that. I would recommend, to myself anyway, to shell into\nthe new jail with jexec , run pkg upgrade to install any new packages,\nand then from the FreeNAS webgui, restart the jail. 
Although\nthe restarted jail will have a new jail number as reported by\nthe jls command.", "title": "updating" }, { "location": "/freebsd_jails_on_freenas/#locale", "text": "When you use jexec to get a shell, you get an environment\nwith a UTF-8 locale. Not so if you ssh into the new jail.\nFor this, put the following contents into ~/.login_conf # ~/.login_conf\nme:\\\n :charset=UTF-8:\\\n :lang=en_US.UTF-8:\\\n :setenv=LC_COLLATE=C:", "title": "locale" }, { "location": "/freebsd_jails_on_freenas/#ssh", "text": "To get ssh running, edit /etc/rc.conf inside the jail. # /etc/rc.conf\nsshd_enable=\"YES\" To start sshd immediately, make any necessary edits to\n/etc/ssh/sshd_config, and run the following command. service sshd start", "title": "ssh" }, { "location": "/freebsd_jails_on_freenas/#byobu", "text": "You'll need newt to configure byobu, and if you don't install tmux\nthen screen will become the backend. pkg install byobu tmux newt If you execute byobu-config (by pressing f9 ), the\nfollowing options seem to work. Some options, of course,\nwill prevent others from working, so you have to enable them\none at a time to see what happens. date disk distro hostname ip address load_average logo time uptime users whoami", "title": "Byobu" }, { "location": "/freebsd_jails_on_freenas/#vim", "text": "Via pkg, there are two options: vim and vim-lite. Note vim will pull\nin a whole bunch of gui dependencies, but vim-lite is not built with python. For instance, powerline will not work with vim-lite because it's not built with\npython. Also, vim-youcompleteme will not work with vim-lite. However, lightline\nwill work with vim-lite, and VimCompletesMe will work with vim-lite. To get lightline working, update $TERM # ~/.config/fish/config.fish\nexport TERM=xterm-256color And vimrc # ~/.vimrc\nset ls=2 Another option is to build vim from source via ports. You can prevent vim\nfrom pulling in a bunch of gui dependencies with the following in /etc/make.conf. # /etc/make.conf\nWITHOUT_X11=yes And then when you compile vim from ports, run make config where you can enable\npython.", "title": "vim" }, { "location": "/freebsd_jails_on_freenas/#python", "text": "For python3 virtualenv virtualenv-3.6 <directory>", "title": "python" }, { "location": "/arch_redis_nspawn/", "text": "Quick Dirty Redis Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container,\ninstall redis, and start/enable redis.service.\nOnce you have the container running, it seems all you have to do to get\nthings working in a container subnet is to change the bind address. 
# /etc/redis.conf\n# bind 127.0.0.1\nbind 0.0.0.0 you can nmap port 6379, be sure to restart redis Again I would refer you to the Arch Wiki", "title": "Quick Dirty Redis Nspawn Container on Arch Linux" }, { "location": "/arch_postgresql_nspawn/", "text": "Quick Dirty Postgresql Nspawn Container on Arch Linux\n\n\nRefer to the \nNspawn\n page for setting up the nspawn container.\n\nAnd then refer to the \nArchWiki instructions\n\nfor postgresql. \n\n\nYou'll want to install postgresql, set a password for the default user \npostgres\n,\nand then log in as postgres and initialize the database. \n\n\npacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the database cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data'\n\n\n\n\nYou'll need to configure \n/var/lib/postgres/data/pg_hba.conf\n and\n\n/var/lib/postgres/data/postgresql.conf\n for remote access,\npresumably with an identd daemon in mind. The identd daemon listens\non port 113, not on the machine with the database server,\nbut on the machine with the client that remotely\nwants to access the database.", "title": "Quick Dirty Postgresql Nspawn Container on Arch Linux" }, { "location": "/arch_postgresql_nspawn/#quick-dirty-postgresql-nspawn-container-on-arch-linux", "text": "Refer to the Nspawn page for setting up the nspawn container. \nAnd then refer to the ArchWiki instructions \nfor postgresql. You'll want to install postgresql, set a password for the default user postgres ,\nand then log in as postgres and initialize the database. pacman -S postgresql\n# passwd for postgresql user \npasswd postgres \n# login as postgres \nsu -l postgres\n# initialize the database cluster\n[postgres]$ initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data' You'll need to configure /var/lib/postgres/data/pg_hba.conf and /var/lib/postgres/data/postgresql.conf for remote access,\npresumably with an identd daemon in mind. The identd daemon listens\non port 113, not on the machine with the database server,\nbut on the machine with the client that remotely\nwants to access the database.", "title": "Quick Dirty Postgresql Nspawn Container on Arch Linux" }, { "location": "/self_signed_certs/", "text": "Setting up Self-Signed Certs\n\n\nThis \njamielinux\n\nblog post looks promising.", "title": "Self Signed Certs" }, { "location": "/self_signed_certs/#setting-up-self-signed-certs", "text": "This jamielinux \nblog post looks promising.", "title": "Setting up Self-Signed Certs" } ] }
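As a footnote to the Postgresql page above: the remote-access edits to postgresql.conf and pg_hba.conf might look something like this minimal sketch. The 10.0.55.0/24 subnet is just the bridge example from the Nspawn page, so substitute your own; swap the ident method for md5 if you aren't running an identd daemon, and restart postgresql.service afterwards.\n\n\n# /var/lib/postgres/data/postgresql.conf\nlisten_addresses = '*'\n\n# /var/lib/postgres/data/pg_hba.conf\n# TYPE  DATABASE  USER  ADDRESS        METHOD\nhost    all       all   10.0.55.0/24   ident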