From 05e8f4229ce64a7680cb597ce9912e31a8849caf Mon Sep 17 00:00:00 2001
From: edolstra
Date: Wed, 3 Jun 2020 13:02:48 +0000
Subject: [PATCH] Update flake.lock and blogs.xml [ci skip]

---
 blogs.xml | 573 +++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 549 insertions(+), 24 deletions(-)

diff --git a/blogs.xml b/blogs.xml
index 9e196c70c0..81f0aefc24 100644
--- a/blogs.xml
+++ b/blogs.xml
@@ -98,6 +98,555 @@ refreshing bevvy while you wait for blocks to sync.</p>
 Sat, 18 Apr 2020 00:00:00 +0000
 support@nixbuild.net (nixbuild.net)
+
+ Graham Christensen: Erase your darlings
+ http://grahamc.com//blog/erase-your-darlings
+ http://grahamc.com/blog/erase-your-darlings
+ <p>I erase my systems at every boot.</p>
+
+<p>Over time, a system collects state on its root partition. This state
+lives in assorted directories like <code class="highlighter-rouge">/etc</code> and <code class="highlighter-rouge">/var</code>, and represents
+every under-documented or out-of-order step in bringing up the
+services.</p>
+
+<blockquote>
+  <p>“Right, run <code class="highlighter-rouge">myapp-init</code>.”</p>
+</blockquote>
+
+<p>These small, inconsequential “oh, oops” steps are the pieces that get
+lost and don’t appear in your runbooks.</p>
+
+<blockquote>
+  <p>“Just download ca-certificates to … to fix …”</p>
+</blockquote>
+
+<p>Each of these quick fixes leaves you doomed to repeat history in three
+years when you’re finally doing that dreaded RHEL 7 to RHEL 8 upgrade.</p>
+
+<blockquote>
+  <p>“Oh, <code class="highlighter-rouge">touch /etc/ipsec.secrets</code> or the l2tp tunnel won’t work.”</p>
+</blockquote>
+
+<h3 id="immutable-infrastructure-gets-us-so-close">Immutable infrastructure gets us <em>so</em> close</h3>
+
+<p>Immutable infrastructure is a wonderfully effective method of
+eliminating so many of these forgotten steps. Leaning in to the pain
+by deleting and replacing your servers on a weekly or monthly basis
+means you are constantly testing and exercising your automation and
+runbooks.</p>
+
+<p>The nugget here is the regular and indiscriminate removal of system
+state. Destroying the whole server doesn’t leave you much room to
+forget the little tweaks you made along the way.</p>
+
+<p>These techniques work great when you meet two requirements:</p>
+
+<ul>
+  <li>you can provision and destroy servers with an API call</li>
+  <li>the servers aren’t inherently stateful</li>
+</ul>
+
+<h4 id="long-running-servers">Long-running servers</h4>
+
+<p>There are lots of cases in which immutable infrastructure <em>doesn’t</em>
+work, and the dirty secret is <strong>those servers need good tools the
+most.</strong></p>
+
+<p>Long-running servers cause long outages. Their runbooks are outdated
+and incomplete. They accrete tweaks and turn into an ossified,
+brittle snowflake — except its arms are load-bearing.</p>
+
+<p>Let’s bring the ideas of immutable infrastructure to these systems
+too. Whether this system is embedded in a stadium’s jumbotron, in a
+datacenter, or under your desk, we <em>can</em> keep the state under control.</p>
+
+<h4 id="fhs-isnt-enough">FHS isn’t enough</h4>
+
+<p>The hard part about applying immutable techniques to long-running
+servers is knowing exactly where your application state ends and the
+operating system, software, and configuration begin.</p>
+
+<p>This is hard because legacy operating systems and the Filesystem
+Hierarchy Standard poorly separate these areas of concern.
+For example, <code class="highlighter-rouge">/var/lib</code> is for state information, but how much of this do
+you actually care about tracking? What did you configure in <code class="highlighter-rouge">/etc</code> on
+purpose?</p>
+
+<p>The answer is probably not a lot.</p>
+
+<p>You may not care, but all of this accumulation of junk is a tarpit.
+Everything becomes harder: replicating production, testing changes,
+undoing mistakes.</p>
+
+<h3 id="new-computer-smell">New computer smell</h3>
+
+<p>Getting a new computer is this moment of cleanliness. The keycaps
+don’t have oils on them, the screen is perfect, and the hard drive
+is fresh and unspoiled — for about an hour or so.</p>
+
+<p>Let’s get back to that.</p>
+
+<h2 id="how-is-this-possible">How is this possible?</h2>
+
+<p>NixOS can boot with only two directories: <code class="highlighter-rouge">/boot</code>, and <code class="highlighter-rouge">/nix</code>.</p>
+
+<p><code class="highlighter-rouge">/nix</code> contains read-only system configurations, which are specified
+by your <code class="highlighter-rouge">configuration.nix</code> and are built and tracked as system
+generations. These never change. Once the files are created in <code class="highlighter-rouge">/nix</code>,
+the only way to change the config’s contents is to build a new system
+configuration with the contents you want.</p>
+
+<p>Any configuration or files created on the drive outside of <code class="highlighter-rouge">/nix</code> are
+state and cruft. We can lose everything outside of <code class="highlighter-rouge">/nix</code> and <code class="highlighter-rouge">/boot</code>
+and have a healthy system. My technique is to explicitly opt in and
+<em>choose</em> which state is important, and only keep that.</p>
+
+<p>How this is possible comes down to the boot sequence.</p>
+
+<p>For NixOS, the bootloader follows the same basic steps as a standard
+Linux distribution: the kernel starts with an initial ramdisk, and the
+initial ramdisk mounts the system disks.</p>
+
+<p>And here is where the similarities end.</p>
+
+<h3 id="nixoss-early-startup">NixOS’s early startup</h3>
+
+<p>NixOS configures the bootloader to pass some extra information: a
+specific system configuration. This is the secret to NixOS’s
+bootloader rollbacks, and also the key to erasing our disk on each
+boot. The parameter is named <code class="highlighter-rouge">systemConfig</code>.</p>
+
+<p>On every startup the very early boot stage knows what the system’s
+configuration should be: the entire system configuration is stored in
+the read-only <code class="highlighter-rouge">/nix/store</code>, and the directory passed through
+<code class="highlighter-rouge">systemConfig</code> has a reference to the config. Early boot then
+manipulates <code class="highlighter-rouge">/etc</code> and <code class="highlighter-rouge">/run</code> to match the chosen setup. Usually this
+involves swapping out a few symlinks.</p>
+
+<p>If <code class="highlighter-rouge">/etc</code> simply doesn’t exist, however, early boot <em>creates</em> <code class="highlighter-rouge">/etc</code>
+and moves on as if it were any other boot. It also <em>creates</em> <code class="highlighter-rouge">/var</code>,
+<code class="highlighter-rouge">/dev</code>, <code class="highlighter-rouge">/home</code>, and any other core directories that must be present.</p>
+
+<p>Simply speaking, an empty <code class="highlighter-rouge">/</code> is <em>not surprising</em> to NixOS. In fact,
+the NixOS netboot, EC2, and installation media all start out this way.</p>
+
+<h2 id="opting-out">Opting out</h2>
+
+<p>Before we can opt in to saving data, we must opt out of saving data
+<em>by default</em>. I do this by setting up my filesystem in a way that
+lets me easily and safely erase the unwanted data, while preserving
+the data I do want to keep.</p>
+
+<p>My preferred method for this is using a ZFS dataset and rolling it
+back to a blank snapshot before it is mounted. A partition with any
+other filesystem would work too: run <code class="highlighter-rouge">mkfs</code> at boot, or something
+similar. If you have a lot of RAM, you could skip the erase step and
+make <code class="highlighter-rouge">/</code> a tmpfs.</p>
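+
+<p>If you went the tmpfs route, the configuration might look roughly
+like this minimal sketch, where the <code class="highlighter-rouge">size</code> and <code class="highlighter-rouge">mode</code> values are
+placeholders you would tune for your own system:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Sketch: keep the root filesystem entirely in RAM, so it is
+  # recreated from scratch on every boot. /nix, /home, and any
+  # persistent data still live on real disks.
+  fileSystems."/" = {
+    device = "none";
+    fsType = "tmpfs";
+    options = [ "defaults" "size=2G" "mode=755" ];
+  };
+}
+</code></pre></div></div>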
+
+<h3 id="opting-out-with-zfs">Opting out with ZFS</h3>
+<p>When installing NixOS, I partition my disk with two partitions: one
+for booting, and another for a ZFS pool. Then I create and mount a
+few datasets.</p>
+
+<p>My root dataset:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/local/root
+</code></pre></div></div>
+
+<p>Before I even mount it, I <strong>create a snapshot while it is totally
+blank</strong>:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs snapshot rpool/local/root@blank
+</code></pre></div></div>
+
+<p>And then mount it:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mount -t zfs rpool/local/root /mnt
+</code></pre></div></div>
+
+<p>Then I mount the partition I created for <code class="highlighter-rouge">/boot</code>:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir /mnt/boot
+# mount /dev/the-boot-partition /mnt/boot
+</code></pre></div></div>
+
+<p>Create and mount a dataset for <code class="highlighter-rouge">/nix</code>:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/local/nix
+# mkdir /mnt/nix
+# mount -t zfs rpool/local/nix /mnt/nix
+</code></pre></div></div>
+
+<p>And a dataset for <code class="highlighter-rouge">/home</code>:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/safe/home
+# mkdir /mnt/home
+# mount -t zfs rpool/safe/home /mnt/home
+</code></pre></div></div>
+
+<p>And finally, a dataset explicitly for state I want to persist between
+boots:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs create -p -o mountpoint=legacy rpool/safe/persist
+# mkdir /mnt/persist
+# mount -t zfs rpool/safe/persist /mnt/persist
+</code></pre></div></div>
+
+<blockquote>
+  <p><em>Note:</em> in my systems, datasets under <code class="highlighter-rouge">rpool/local</code> are never backed
+up, and datasets under <code class="highlighter-rouge">rpool/safe</code> are.</p>
+</blockquote>
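+
+<p>After these mounts, <code class="highlighter-rouge">nixos-generate-config</code> should detect the
+datasets on its own. For reference, the resulting entries in
+<code class="highlighter-rouge">hardware-configuration.nix</code> look roughly like this sketch:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Each legacy-mountpoint dataset gets an explicit fileSystems entry.
+  fileSystems."/" = { device = "rpool/local/root"; fsType = "zfs"; };
+  fileSystems."/nix" = { device = "rpool/local/nix"; fsType = "zfs"; };
+  fileSystems."/home" = { device = "rpool/safe/home"; fsType = "zfs"; };
+  fileSystems."/persist" = { device = "rpool/safe/persist"; fsType = "zfs"; };
+
+  # /boot's fsType depends on how you formatted that partition
+  # (vfat for EFI is assumed here).
+  fileSystems."/boot" = { device = "/dev/the-boot-partition"; fsType = "vfat"; };
+}
+</code></pre></div></div>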
+
+<p>And now safely erasing the root dataset on each boot is very easy:
+after devices are made available, roll back to the blank snapshot:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">boot</span><span class="o">.</span><span class="nv">initrd</span><span class="o">.</span><span class="nv">postDeviceCommands</span> <span class="o">=</span> <span class="nv">lib</span><span class="o">.</span><span class="nv">mkAfter</span> <span class="s2">''</span><span class="err">
+</span><span class="s2">    zfs rollback -r rpool/local/root@blank</span><span class="err">
+</span><span class="s2">  ''</span><span class="p">;</span>
+<span class="p">}</span>
+</code></pre></div></div>
+
+<p>I then finish the installation as normal. If all goes well, the
+next boot will start with an empty root partition but otherwise be
+configured exactly as you specified.</p>
+
+<h2 id="opting-in">Opting in</h2>
+
+<p>Now that I’m keeping no state, it is time to specify what I do want
+to keep. My choices here are different based on the role of the
+system: a laptop has different state than a server.</p>
+
+<p>Here are some different pieces of state and how I preserve them. These
+examples largely use reconfiguration or symlinks, but using ZFS
+datasets and mount points would work too.</p>
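+
+<p>The dataset-and-mount-point variant might look like the sketch below.
+The <code class="highlighter-rouge">rpool/safe/postgres</code> dataset is hypothetical: you would create
+it with <code class="highlighter-rouge">zfs create</code> just like the datasets above.</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Instead of symlinking into /persist, give a stateful service a
+  # dedicated dataset mounted directly over its state directory.
+  fileSystems."/var/lib/postgresql" = {
+    device = "rpool/safe/postgres";
+    fsType = "zfs";
+  };
+}
+</code></pre></div></div>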
+
+<h4 id="wireguard-private-keys">WireGuard private keys</h4>
+
+<p>Create a directory under <code class="highlighter-rouge">/persist</code> for the key:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/wireguard/
+</code></pre></div></div>
+
+<p>And use Nix’s WireGuard module to generate the key there:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">networking</span><span class="o">.</span><span class="nv">wireguard</span><span class="o">.</span><span class="nv">interfaces</span><span class="o">.</span><span class="nv">wg0</span> <span class="o">=</span> <span class="p">{</span>
+    <span class="nv">generatePrivateKeyFile</span> <span class="o">=</span> <span class="kc">true</span><span class="p">;</span>
+    <span class="nv">privateKeyFile</span> <span class="o">=</span> <span class="s2">"/persist/etc/wireguard/wg0"</span><span class="p">;</span>
+  <span class="p">};</span>
+<span class="p">}</span>
+</code></pre></div></div>
+
+<h4 id="networkmanager-connections">NetworkManager connections</h4>
+
+<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/etc</code> structure:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/NetworkManager/system-connections
+</code></pre></div></div>
+
+<p>And use Nix’s <code class="highlighter-rouge">environment.etc</code> option to set up the symlink:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">environment</span><span class="o">.</span><span class="nv">etc</span><span class="o">.</span><span class="s2">"NetworkManager/system-connections"</span> <span class="o">=</span> <span class="p">{</span>
+    <span class="nv">source</span> <span class="o">=</span> <span class="s2">"/persist/etc/NetworkManager/system-connections/"</span><span class="p">;</span>
+  <span class="p">};</span>
+<span class="p">}</span>
+</code></pre></div></div>
+
+<h4 id="bluetooth-devices">Bluetooth devices</h4>
+
+<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/var</code> structure:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/var/lib/bluetooth
+</code></pre></div></div>
+
+<p>And then use systemd’s tmpfiles.d rules to create a symlink from
+<code class="highlighter-rouge">/var/lib/bluetooth</code> to my persisted directory:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">systemd</span><span class="o">.</span><span class="nv">tmpfiles</span><span class="o">.</span><span class="nv">rules</span> <span class="o">=</span> <span class="p">[</span>
+    <span class="s2">"L /var/lib/bluetooth - - - - /persist/var/lib/bluetooth"</span>
+  <span class="p">];</span>
+<span class="p">}</span>
+</code></pre></div></div>
+
+<h4 id="ssh-host-keys">SSH host keys</h4>
+
+<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/etc</code> structure:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/etc/ssh
+</code></pre></div></div>
+
+<p>And use Nix’s openssh module to create and use the keys in that
+directory:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">services</span><span class="o">.</span><span class="nv">openssh</span> <span class="o">=</span> <span class="p">{</span>
+    <span class="nv">enable</span> <span class="o">=</span> <span class="kc">true</span><span class="p">;</span>
+    <span class="nv">hostKeys</span> <span class="o">=</span> <span class="p">[</span>
+      <span class="p">{</span>
+        <span class="nv">path</span> <span class="o">=</span> <span class="s2">"/persist/etc/ssh/ssh_host_ed25519_key"</span><span class="p">;</span>
+        <span class="nv">type</span> <span class="o">=</span> <span class="s2">"ed25519"</span><span class="p">;</span>
+      <span class="p">}</span>
+      <span class="p">{</span>
+        <span class="nv">path</span> <span class="o">=</span> <span class="s2">"/persist/etc/ssh/ssh_host_rsa_key"</span><span class="p">;</span>
+        <span class="nv">type</span> <span class="o">=</span> <span class="s2">"rsa"</span><span class="p">;</span>
+        <span class="nv">bits</span> <span class="o">=</span> <span class="mi">4096</span><span class="p">;</span>
+      <span class="p">}</span>
+    <span class="p">];</span>
+  <span class="p">};</span>
+<span class="p">}</span>
+</code></pre></div></div>
+
+<h4 id="acme-certificates">ACME certificates</h4>
+
+<p>Create a directory under <code class="highlighter-rouge">/persist</code>, mirroring the <code class="highlighter-rouge">/var</code> structure:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># mkdir -p /persist/var/lib/acme
+</code></pre></div></div>
+
+<p>And then use systemd’s tmpfiles.d rules to create a symlink from
+<code class="highlighter-rouge">/var/lib/acme</code> to my persisted directory:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span>
+  <span class="nv">systemd</span><span class="o">.</span><span class="nv">tmpfiles</span><span class="o">.</span><span class="nv">rules</span> <span class="o">=</span> <span class="p">[</span>
+    <span class="s2">"L /var/lib/acme - - - - /persist/var/lib/acme"</span>
+  <span class="p">];</span>
+<span class="p">}</span>
+</code></pre></div></div>
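+
+<p>Another piece of state that can be handled the same way is systemd’s
+<code class="highlighter-rouge">/etc/machine-id</code>. Assuming you have copied the generated file into
+<code class="highlighter-rouge">/persist/etc</code> once, a sketch:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Keep a stable machine-id across the wipe-on-boot cycle.
+  # Prerequisite (one time): cp /etc/machine-id /persist/etc/
+  environment.etc."machine-id".source = "/persist/etc/machine-id";
+}
+</code></pre></div></div>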
+
+<h3 id="answering-the-question-what-am-i-about-to-lose">Answering the question “what am I about to lose?”</h3>
+
+<p>I found this process a bit scary for the first few weeks: was I losing
+important data each reboot? No, I wasn’t.</p>
+
+<p>If you’re worried and want to know what state you’ll lose on the next
+boot, you can list the files on your root filesystem and see if you’re
+missing something important:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># tree -x /
+├── bin
+│   └── sh -&gt; /nix/store/97zzcs494vn5k2yw-dash-0.5.10.2/bin/dash
+├── boot
+├── dev
+├── etc
+│   ├── asound.conf -&gt; /etc/static/asound.conf
+... snip ...
+</code></pre></div></div>
+
+<p>ZFS can give you a similar answer:</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># zfs diff rpool/local/root@blank
+M /
++ /nix
++ /etc
++ /root
++ /var/lib/is-nix-channel-up-to-date
++ /etc/pki/fwupd
++ /etc/pki/fwupd-metadata
+... snip ...
+</code></pre></div></div>
+
+<h2 id="your-stateless-future">Your stateless future</h2>
+
+<p>You may bump into new state you meant to be preserving. When I add a
+new service, I think about the state it writes and whether I care
+about it. If I do, I find a way to redirect its state to
+<code class="highlighter-rouge">/persist</code>.</p>
+
+<p>Take care to reboot these machines on a somewhat regular basis. It
+will keep things agile, proving your system state is tracked
+correctly.</p>
+
+<p>This technique has given me the “new computer smell” on every boot
+without the datacenter full of hardware, and even on systems that do
+carry important state. I have deployed this strategy to systems large
+and small: build farm servers, database servers, my NAS and home
+server, my Raspberry Pi garage door opener, and laptops.</p>
+
+<p>NixOS enables powerful new deployment models in so many ways, allowing
+for systems of all shapes and sizes to be managed properly and
+consistently. I think this model of ephemeral roots is yet
+another example of this flexibility and power. I would like to see
+this partitioning scheme become a reference architecture and take us
+out of this eternal tarpit of legacy.</p>
+ Mon, 13 Apr 2020 00:00:00 +0000
+
+
+ Graham Christensen: ZFS Datasets for NixOS
+ http://grahamc.com//blog/nixos-on-zfs
+ http://grahamc.com/blog/nixos-on-zfs
+ <p>The outdated and historical nature of the <a href="https://grahamc.com/feed/fhs">Filesystem Hierarchy
+Standard</a> means traditional Linux distributions have to go to great
+lengths to separate “user data” from “system data.”</p>
+
+<p>NixOS’s filesystem architecture does cleanly separate user data from
+system data, and has a much easier job to do.</p>
+
+<h3 id="traditional-linuxes">Traditional Linuxes</h3>
+
+<p>Because FHS mixes these two concerns across the entire hierarchy,
+splitting them requires identifying every point across dozens of
+directories where the data is the system’s or the user’s. When
+adding ZFS to the mix, the installers typically have to create over
+a dozen datasets to accomplish this.</p>
+
+<p>For example, Ubuntu’s upcoming ZFS support creates 16 datasets:</p>
+
+<pre><code class="language-tree">rpool/
+├── ROOT
+│   └── ubuntu_lwmk7c
+│       ├── log
+│       ├── mail
+│       ├── snap
+│       ├── spool
+│       ├── srv
+│       ├── usr
+│       │   └── local
+│       ├── var
+│       │   ├── games
+│       │   └── lib
+│       │       ├── AccountServices
+│       │       ├── apt
+│       │       ├── dpkg
+│       │       └── NetworkManager
+│       └── www
+└── USERDATA
+</code></pre>
+
+<p>Going through the great pains of separating this data comes with
+significant advantages: a recursive snapshot at any point in the tree
+will create an atomic, point-in-time snapshot of every dataset below.</p>
+
+<p>This means in order to create a consistent snapshot of the system
+data, an administrator would only need to take a recursive snapshot
+at <code class="highlighter-rouge">ROOT</code>. The same is true for user data: take a recursive snapshot of
+<code class="highlighter-rouge">USERDATA</code> and all user data is saved.</p>
+
+<h3 id="nixos">NixOS</h3>
+
+<p>Because Nix stores all of its build products in <code class="highlighter-rouge">/nix/store</code>, NixOS
+doesn’t mingle these two concerns. NixOS’s runtime system, installed
+packages, and rollback targets are all stored in <code class="highlighter-rouge">/nix</code>.</p>
+
+<p>User data is not.</p>
+
+<p>This removes the entire complicated tree of datasets needed to
+facilitate FHS, and leaves us with only a few.</p>
+
+<h2 id="datasets">Datasets</h2>
+
+<p>Design for atomic, recursive snapshots when laying out the
+datasets.</p>
+
+<p>In particular, I don’t back up the <code class="highlighter-rouge">/nix</code> directory. This entire
+directory can always be rebuilt later from the system’s
+<code class="highlighter-rouge">configuration.nix</code>, and isn’t worth the space.</p>
+
+<p>One way to model this might be splitting up the data into three
+top-level datasets:</p>
+
+<pre><code class="language-tree">tank/
+├── local
+│   └── nix
+├── system
+│   └── root
+└── user
+    └── home
+</code></pre>
+
+<p>In <code class="highlighter-rouge">tank/local</code>, I would store datasets that should almost never be
+snapshotted or backed up. <code class="highlighter-rouge">tank/system</code> would store data that I would
+want periodic snapshots for. Most importantly, <code class="highlighter-rouge">tank/user</code> would
+contain data I want regular snapshots and backups for, with a long
+retention policy.</p>
+
+<p>From here, you could add a ZFS dataset per user:</p>
+
+<pre><code class="language-tree">tank/
+├── local
+│   └── nix
+├── system
+│   └── root
+└── user
+    └── home
+        ├── grahamc
+        └── gustav
+</code></pre>
+
+<p>Or a separate dataset for <code class="highlighter-rouge">/var</code>:</p>
+
+<pre><code class="language-tree">tank/
+├── local
+│   └── nix
+├── system
+│   ├── var
+│   └── root
+└── user
+</code></pre>
+
+<p>Importantly, this gives you three buckets for independent and
+regular snapshots.</p>
+
+<p>The important part is having <code class="highlighter-rouge">/nix</code> under its own top-level dataset.
+This makes it a “cousin” to the data you <em>do</em> want backup coverage on,
+making it easier to take deep, recursive snapshots atomically.</p>
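+
+<p>Once the datasets are laid out this way, NixOS can automate the
+periodic snapshots. A small sketch, assuming you tag the datasets you
+care about with the <code class="highlighter-rouge">com.sun:auto-snapshot</code> property:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Snapshots every dataset that has com.sun:auto-snapshot=true,
+  # e.g.: zfs set com.sun:auto-snapshot=true tank/user
+  services.zfs.autoSnapshot.enable = true;
+}
+</code></pre></div></div>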
+
+<h2 id="properties">Properties</h2>
+
+<ul>
+  <li>Enable compression with <code class="highlighter-rouge">compression=on</code>. Specifying <code class="highlighter-rouge">on</code> instead of
+<code class="highlighter-rouge">lz4</code> or another specific algorithm will always pick the best
+available compression algorithm.</li>
+  <li>The dataset containing journald’s logs (where <code class="highlighter-rouge">/var</code> lives) should
+have <code class="highlighter-rouge">xattr=sa</code> and <code class="highlighter-rouge">acltype=posixacl</code> set to allow regular users to
+read their journal.</li>
+  <li>Nix doesn’t use <code class="highlighter-rouge">atime</code>, so <code class="highlighter-rouge">atime=off</code> on the <code class="highlighter-rouge">/nix</code> dataset is
+fine.</li>
+  <li>NixOS requires (as of 2020-04-11) <code class="highlighter-rouge">mountpoint=legacy</code> for all
+datasets. NixOS does not yet have tooling to require implicitly
+created ZFS mounts to settle before booting, and <code class="highlighter-rouge">mountpoint=legacy</code>
+plus explicit mount points in <code class="highlighter-rouge">hardware-configuration.nix</code> will
+ensure all your datasets are mounted at the right time (see the
+sketch after this list).</li>
+</ul>
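+
+<p>For the example <code class="highlighter-rouge">tank</code> layout above, those explicit mount points
+would look roughly like this sketch in <code class="highlighter-rouge">hardware-configuration.nix</code>:</p>
+
+<div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
+  # Every mountpoint=legacy dataset is mounted explicitly by NixOS.
+  fileSystems."/" = { device = "tank/system/root"; fsType = "zfs"; };
+  fileSystems."/var" = { device = "tank/system/var"; fsType = "zfs"; };
+  fileSystems."/nix" = { device = "tank/local/nix"; fsType = "zfs"; };
+  fileSystems."/home" = { device = "tank/user/home"; fsType = "zfs"; };
+}
+</code></pre></div></div>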
+
+<p>I don’t know how to pick <code class="highlighter-rouge">ashift</code>, and usually just allow ZFS to guess
+on my behalf.</p>
+
+<h2 id="partitioning">Partitioning</h2>
+
+<p>I only create two partitions:</p>
+
+<ol>
+  <li><code class="highlighter-rouge">/boot</code> formatted <code class="highlighter-rouge">vfat</code> for EFI, or <code class="highlighter-rouge">ext4</code> for BIOS</li>
+  <li>The ZFS dataset partition.</li>
+</ol>
+
+<p>There are spooky articles saying to only give ZFS entire disks. The
+truth is that the real concern is splitting a disk into two <em>active</em>
+partitions. Splitting the disk this way is just fine, since <code class="highlighter-rouge">/boot</code> is
+rarely read or written.</p>
+
+<blockquote>
+  <p><em>Note:</em> If you do partition the disk, make sure you set the disk’s
+scheduler to <code class="highlighter-rouge">none</code>. ZFS takes this step automatically if it does
+control the entire disk.</p>
+
+  <p>On NixOS, you can set your scheduler to <code class="highlighter-rouge">none</code> via:</p>
+
+  <div class="language-nix highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span> <span class="nv">boot</span><span class="o">.</span><span class="nv">kernelParams</span> <span class="o">=</span> <span class="p">[</span> <span class="s2">"elevator=none"</span> <span class="p">];</span> <span class="p">}</span>
+</code></pre></div>  </div>
+</blockquote>
+
+<h1 id="clean-isolation">Clean isolation</h1>
+
+<p>NixOS’s clean separation of concerns reduces the amount of complexity
+we need to track when considering and planning our datasets. This
+gives us flexibility later, and enables some superpowers like erasing
+my computer on every boot, which I’ll write about on Monday.</p>
+ Sat, 11 Apr 2020 00:00:00 +0000
+
+ nixbuild.net: New nixbuild.net Resources
+ https://blog.nixbuild.net/posts/2020-03-27-nixbuild-net-beta.html
@@ -607,30 +1156,6 @@ As of today, all binary caches are served by CloudFlare CDN.
 Current State In order to show how it changes the handling of parameters to
 derivation, the first example will show the current state with __structuredAttrs
 set to false and the stdenv.mkDerivation wrapper around derivation.
All parameters are passed to the builder as environment variables, canonicalised by Nix in imitation of shell script conventions: Mon, 20 Jan 2020 12:00:00 +0000 - - Hercules Labs: Hercules CI & Cachix split up - https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ - https://blog.hercules-ci.com/2020/01/14/hercules-ci-cachix-split-up/ - <p>After careful consideration of how to balance between the two products, we’ve decided to split up. Each of the two products will be a separate entity:</p> - -<ul> - <li>Hercules CI becomes part of Robert Hensing’s Ensius B.V.</li> - <li>Cachix becomes part of Domen Kožar’s Enlambda OÜ</li> -</ul> - -<p>For customers there will be no changes, except for the point of contact in support requests.</p> - -<p>Domen &amp; Robert</p> - Tue, 14 Jan 2020 00:00:00 +0000 - - - Mayflower: Windows-on-NixOS, part 1: Migrating bare-metal to a VM - https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ - https://nixos.mayflower.consulting/blog/2019/11/27/windows-vm-storage/ - This is part 1 of a series of blog posts explaining how we took an existing Windows installation on hardware and moved it into a VM running on top of NixOS. -Background We have a decently-equipped desktop PC sitting in our office, which is designated for data experiments using TensorFlow and such. During off-hours, it’s also used for games, and for that purpose it has Windows installed on it. We decided to try moving Windows into a VM within NixOS so that we could run both operating systems in parallel. - Wed, 27 Nov 2019 06:00:00 +0000 -