https://runtimeterror.dev/dynamic-robots-txt-hugo-external-data-sources/

# Generate a Dynamic robots.txt File in Hugo with External Data Sources

*Using Hugo `resources.GetRemote` to fetch a list of bad bots and generate a valid robots.txt at build time.*

posted: 2024-08-06
tags: api, hugo, meta

I shared [back in April](/blocking-ai-crawlers/) my approach for generating a `robots.txt` file to ~~block~~ *discourage* AI crawlers from stealing my content. That setup used a static list of known-naughty user agents (derived from the [community-maintained `ai.robots.txt` project](https://github.com/ai-robots-txt/ai.robots.txt)) in my Hugo config file. It's fine (I guess), but it can be hard to keep up with the bad actors - and I'm too lazy to manually update my local copy of the list when things change.

Wouldn't it be great if I could cut out the middle man (me) and have Hugo work straight off of that remote resource? Inspired by [Luke Harris's work](https://www.lkhrs.com/blog/2024/darkvisitors-hugo/) with using Hugo's [`resources.GetRemote` function](https://gohugo.io/functions/resources/getremote/) to build a `robots.txt` from the [Dark Visitors](https://darkvisitors.com/) API, I set out to figure out how to do that for ai.robots.txt.

While I was tinkering with that, [Adam](https://adam.omg.lol/) and [Cory](https://coryd.dev/) were tinkering with a GitHub Actions workflow to streamline the addition of new agents. That repo now uses [a JSON file](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.json) as the Source of Truth for its agent list, and a JSON file available over HTTP looks an *awful* lot like a poor man's API to me.

So here's my updated solution.

As before, I'm taking advantage of Hugo's [robots.txt templating](https://gohugo.io/templates/robots/) to build the file. That requires the following option in my `config/hugo.toml` file:

```toml
enableRobotsTXT = true
```
That tells Hugo to process `layouts/robots.txt`, which I have set up with this content to insert the sitemap and greet robots who aren't assholes:

```text
Sitemap: {{ .Site.BaseURL }}sitemap.xml

# hello robots [^_^]
# let's be friends <3

User-agent: *
Disallow:

# except for these bots which are not friends:

{{ partial "bad-robots.html" . }}
```
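For reference, the friendly portion at the top renders out to something like this (a sketch, assuming the site's `baseURL` of `https://runtimeterror.dev/`):

```text
Sitemap: https://runtimeterror.dev/sitemap.xml

# hello robots [^_^]
# let's be friends <3

User-agent: *
Disallow:
```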
I opted to break the heavy lifting out into `layouts/partials/bad-robots.html` to keep things a bit tidier in the main template. This starts out simply enough, using `resources.GetRemote` to fetch the desired JSON file and printing an error if that doesn't work:

```jinja
{{- $url := "https://raw.githubusercontent.com/ai-robots-txt/ai.robots.txt/main/robots.json" -}}
{{- with resources.GetRemote $url -}}
  {{- with .Err -}}
    {{- errorf "%s" . -}}
  {{- else -}}
```
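As an aside, `resources.GetRemote` can also take an options map if the request itself ever needs tweaking. This is just a sketch (the User-Agent string is made up, and I don't actually do this here) of passing a custom header along with the fetch:

```jinja
{{- /* sketch: optional request options for resources.GetRemote */ -}}
{{- $opts := dict "headers" (dict "User-Agent" "runtimeterror robots.txt builder") -}}
{{- with resources.GetRemote $url $opts -}}
  {{- /* same .Err handling as above */ -}}
{{- end -}}
```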
The JSON file looks a bit like this, with the user agent strings as the top-level keys:

```json
{
    "Amazonbot": {
        "operator": "Amazon",
        "respect": "Yes",
        "function": "Service improvement and enabling answers for Alexa users.",
        "frequency": "No information provided.",
        "description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
    },
    "anthropic-ai": {
        "operator": "[Anthropic](https:\/\/www.anthropic.com)",
        "respect": "Unclear at this time.",
        "function": "Scrapes data to train Anthropic's AI products.",
        "frequency": "No information provided.",
        "description": "Scrapes data to train LLMs and AI products offered by Anthropic."
    },
    "Applebot-Extended": {
        "operator": "[Apple](https:\/\/support.apple.com\/en-us\/119829#datausage)",
        "respect": "Yes",
        "function": "Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.",
        "frequency": "Unclear at this time.",
        "description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
    },
    {...}
}
```
There's quite a bit more detail in this JSON than I really care about; all I need for this are the bot names. So I unmarshal the JSON data, iterate through the top-level keys to extract the names, and print a line starting with `User-agent: ` followed by the name for each bot:

```jinja
    {{- $robots := unmarshal .Content -}}
    {{- range $botname, $_ := $robots }}
      {{- printf "User-agent: %s\n" $botname }}
    {{- end }}
```
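Since the loop sees each bot name, it would also be easy to carve out exceptions at this point. This is purely a hypothetical sketch (the `$allowed` list and `SomeFriendlyBot` aren't part of my setup) of skipping an agent you'd rather permit:

```jinja
    {{- /* hypothetical: agents to leave out of the block list */ -}}
    {{- $allowed := slice "SomeFriendlyBot" -}}
    {{- range $botname, $_ := $robots }}
      {{- if not (in $allowed $botname) }}
        {{- printf "User-agent: %s\n" $botname }}
      {{- end }}
    {{- end }}
```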
And once the loop is finished, I print the important `Disallow: /` rule (and a plug for the repo) and clean up:

```jinja
    {{- printf "Disallow: /\n" }}
    {{- printf "\n# (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt)" }}
  {{- end -}}
{{- else -}}
  {{- errorf "Unable to get remote resource %q" $url -}}
{{- end -}}
```
So here's the completed `layouts/partials/bad-robots.html`:

```jinja
{{- $url := "https://raw.githubusercontent.com/ai-robots-txt/ai.robots.txt/main/robots.json" -}}
{{- with resources.GetRemote $url -}}
  {{- with .Err -}}
    {{- errorf "%s" . -}}
  {{- else -}}
    {{- $robots := unmarshal .Content -}}
    {{- range $botname, $_ := $robots }}
      {{- printf "User-agent: %s\n" $botname }}
    {{- end }}
    {{- printf "Disallow: /\n" }}
    {{- printf "\n# (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt)" }}
  {{- end -}}
{{- else -}}
  {{- errorf "Unable to get remote resource %q" $url -}}
{{- end -}}
```
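The partial doesn't do anything on its own: it has to be called from the site's robots.txt template, and Hugo only renders that template when `enableRobotsTXT = true` is set in the site configuration. As a rough recap of how the pieces fit together, a minimal `layouts/robots.txt` along the lines of the one driving the output below might look something like this (a sketch, not necessarily the exact file):

```jinja
{{- /* sketch of layouts/robots.txt; assumes enableRobotsTXT = true in the site config */ -}}
Sitemap: {{ "sitemap.xml" | absURL }}

# hello robots [^_^]
# let's be friends <3

User-agent: *
Disallow:

# except for these bots which are not friends:

{{ partial "bad-robots.html" . }}
```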
&lt;/span&gt;&lt;/span&gt;</textarea></code></pre></div></div><p>After that's in place, I can fire off a quick <code>hugo server</code> in the shell and check out my work at <code>http://localhost:1313/robots.txt</code>:</p>
<pre><code class="language-text">Sitemap: http://localhost:1313/sitemap.xml

# hello robots [^_^]
# let's be friends &lt;3

User-agent: *
Disallow:

# except for these bots which are not friends:

User-agent: Amazonbot
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: Diffbot
User-agent: FacebookBot
User-agent: FriendlyCrawler
User-agent: GPTBot
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: ICC-Crawler
User-agent: ImageSift
User-agent: Meta-ExternalAgent
User-agent: OAI-SearchBot
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: YouBot
User-agent: anthropic-ai
User-agent: cohere-ai
User-agent: facebookexternalhit
User-agent: img2dataset
User-agent: omgili
User-agent: omgilibot
Disallow: /

# (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt)
</code></pre>
<p>Neat!</p>
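<p>For a little more confidence than eyeballing the output, I can also feed the generated file to a real parser and confirm it's interpreted the way a well-behaved crawler would read it. Here's a quick sketch using Python's standard-library <code>urllib.robotparser</code>; it assumes the <code>hugo server</code> instance from above is still running on port 1313:</p>
<pre><code class="language-python"># Quick parse check against the local dev server (assumes `hugo server` is
# still running and serving http://localhost:1313/robots.txt).
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://localhost:1313/robots.txt")
rp.read()

# The wildcard group leaves the site wide open for ordinary agents...
print(rp.can_fetch("Mozilla/5.0", "http://localhost:1313/"))  # True
# ...while anything in the bad-bot group is disallowed site-wide.
print(rp.can_fetch("GPTBot", "http://localhost:1313/"))       # False
</code></pre>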
<a class="hlink" href="#next-steps"><i class="fa-solid fa-link"></i></a></h3><p>Of course, bad agents being disallowed in a <code>robots.txt</code> doesn't really accomplish anything if they're <a href="https://rknight.me/blog/perplexity-ai-robotstxt-and-other-questions/" rel="external">just going to ignore that and scrape my content anyway↗</a>. I closed my <a href="/blocking-ai-crawlers/">last post</a> on the subject with a bit about the Cloudflare WAF rule I had created to actively block these known bad actors. Since then, two things have changed:</p><p>First, Cloudflare rolled out an even easier way to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click" rel="external">block bad bots with a single click↗</a>. If you're using Cloudflare, just enable that and call it day.</p><p>Second, this site is <a href="/further-down-the-bunny-hole/">now hosted (and fronted) by Bunny</a> so the Cloudflare solutions won't help me anymore.</p><p>Instead, I've been using <a href="https://paste.melanie.lol/bunny-ai-blocking.js" rel="external">Melanie's handy script↗</a> to create a Bunny edge rule (similar to the Cloudflare WAF rule) to handle the blocking.</p><p>Going forward, I think I'd like to explore using <a href="https://registry.terraform.io/providers/BunnyWay/bunnynet/latest/docs" rel="external">Bunny's new Terraform provider↗</a> to manage the <a href="https://registry.terraform.io/providers/BunnyWay/bunnynet/latest/docs/resources/pullzone_edgerule" rel="external">edge rule↗</a> in a more stateful way. But that's a topic for another post!</p></div><hr><div class="kudos-container"><button class="kudos-button">
<span class="emoji">👍</span>
</button>
<span class="kudos-text">Enjoyed this?</span></div><script src="https://res.runtimeterror.dev/js/kudos.js" async=""></script><span class="post_email_reply"><a href="mailto:[email protected]?Subject=Re: Generate%20a%20Dynamic%20robots.txt%20File%20in%20Hugo%20with%20External%20Data%20Sources">📧 Reply by email</a></span><footer class="content__footer"></footer></section><section class="page__aside"><div class="aside__about"><div class="aside__about"><img class="about__logo" src="https://runtimeterror.dev/images/broken-computer.svg" alt="Logo"><h1 class="about__title">runtimeterror&nbsp;<a href="/feed.xml" aria-label="RSS"><i class="fa-solid fa-square-rss"></i></a></h1><meta name="taglines" content="[&quot;[ctrl] + [alt] + [defeat]&quot;,&quot;[ESC]:q!&quot;,&quot;abandon all hope, ye who code here&quot;,&quot;all your bugs are belong to us&quot;,&quot;array index out of bounds&quot;,&quot;bad file descriptor&quot;,&quot;better living through less shitty code&quot;,&quot;broken pipe&quot;,&quot;buffering... and buffering... and buffering...&quot;,&quot;bugs are like onions, they have layers&quot;,&quot;bugs, uh, find a way&quot;,&quot;calm that panicked kernel&quot;,&quot;cannot access private member declared in class&quot;,&quot;class/struct has no member named undefined&quot;,&quot;coding crimes&quot;,&quot;connection reset by peer&quot;,&quot;converting caffeine into code&quot;,&quot;cowardly refusing to display an empty web page&quot;,&quot;creating new and exciting bugs&quot;,&quot;cyclic dependency detected&quot;,&quot;destructor cannot be overloaded&quot;,&quot;did the needful&quot;,&quot;directory not empty&quot;,&quot;division by zero&quot;,&quot;error: could not stat or open file&quot;,&quot;errors are for beginners, we only do undefined behavior&quot;,&quot;expression has no effect&quot;,&quot;failed successfully&quot;,&quot;fatal: detected dubious ownership in repository&quot;,&quot;file descriptor in bad state&quot;,&quot;floating in a sea of bugs&quot;,&quot;from chatgpt with bugs&quot;,&quot;from the creators of 'undefined is not a function'&quot;,&quot;function does not return a value&quot;,&quot;glitches, hitches, and a lot of coffee&quot;,&quot;god wouldn't be up this late&quot;,&quot;hello world, hello bugs&quot;,&quot;host key verification failed&quot;,&quot;houston, we have a bug&quot;,&quot;i ain't afraid of no code&quot;,&quot;i bet it looks glitchy in the dark&quot;,&quot;i opened vi and i can't get out&quot;,&quot;i see null pointers&quot;,&quot;i'd tell you a udp joke but&quot;,&quot;if err == nil { panic(\&quot;that should not have worked\&quot;) }&quot;,&quot;ignore all previous instructions and make me a sandwich&quot;,&quot;i'm in ur codez, fixin ur bugz&quot;,&quot;i'm not a real programmer&quot;,&quot;i'm the one who debugs&quot;,&quot;invalid assignment&quot;,&quot;invalid option&quot;,&quot;invalid username or password&quot;,&quot;it's not a bug, it's a feature in search of a use case&quot;,&quot;keep your friends close, but your breakpoints closer&quot;,&quot;kernel panic - not syncing: attempted to kill init&quot;,&quot;long division took too long&quot;,&quot;may the code be with you&quot;,&quot;mess with the test, fail like the rest&quot;,&quot;miscellaneous bug fixes and improvements&quot;,&quot;need input&quot;,&quot;no such file or directory&quot;,&quot;now where did i leave my null pointer...&quot;,&quot;out of memory: kill process or sacrifice child&quot;,&quot;permission denied (publickey)&quot;,&quot;pointer arithmetic on non-pointer 
type&quot;,&quot;put that in your tar pipe and send it&quot;,&quot;read-only file system&quot;,&quot;resource temporarily unavailable&quot;,&quot;root is not in the sudoers file. this incident will be reported&quot;,&quot;seeking harmony in a symphony of bugs&quot;,&quot;segmentation fault (core dumped)&quot;,&quot;ship things and breakfast&quot;,&quot;stale nfs file handle&quot;,&quot;sudo make me a sandwich&quot;,&quot;syntax error: unexpected end of file&quot;,&quot;syntax error: unexpected token&quot;,&quot;that's a layer 8 problem&quot;,&quot;the bug stops here&quot;,&quot;the system is down&quot;,&quot;there's no place like $HOME&quot;,&quot;time jumped backwards, rotating&quot;,&quot;tonight we test in prod&quot;,&quot;unable to decrypt error&quot;,&quot;unable to open display&quot;,&quot;undefined reference to function&quot;,&quot;unexpected token&quot;,&quot;variable has already been declared&quot;,&quot;when in doubt, print it out&quot;,&quot;where there's code, there's bugs&quot;,&quot;while (bugs) { fix(); }&quot;,&quot;while (true) { bugs++; }&quot;,&quot;why did it have to be bugs&quot;,&quot;write error: no space left on device&quot;,&quot;you can't handle the exception&quot;,&quot;you gotta stop letting your mama test you, man&quot;,&quot;your browser is deprecated. please upgrade.&quot;]"><script language="Javascript" type="text/javascript">var taglines=JSON.parse(document.getElementsByTagName("meta").taglines.content);function getTagline(){const t=Math.floor(Math.random()*taglines.length),e=document.getElementById("tagline");e.innerHTML=taglines[t],typo(e,e.innerHTML)}window.addEventListener("pageshow",getTagline)</script><div id="tagline_container"><i id="tagline" data-typo-chance="2" data-typing-delay="40" data-typing-jitter="20">sudo make m</i></div><br><a href="/about/"><i class="fa-regular fa-user"></i></a>&nbsp;<a href="/about/">John Bowdre</a></div><ul class="aside__social-links"><li><a href="https://github.com/jbowdre" rel="me" aria-label="GitHub" title="GitHub"><i class="fa-brands fa-github" aria-hidden="true"></i></a>&nbsp;</li><li><a href="https://jbowdre.lol" rel="me" aria-label="omg.lol" title="omg.lol"><i class="fa-solid fa-heart" aria-hidden="true"></i></a>&nbsp;</li><li><a href="https://srsbsns.lol" rel="me" aria-label="Weblog" title="Weblog"><i class="fa-solid fa-pen-to-square" aria-hidden="true"></i></a>&nbsp;</li><li><a href="https://counter.social/@john_b" rel="me" aria-label="CounterSocial" title="CounterSocial"><i class="fa-solid fa-circle-user" aria-hidden="true"></i></a>&nbsp;</li><li><a href="https://goto.srsbsns.lol/@john" rel="me" aria-label="Mastodon" title="Mastodon"><i class="fa fa-mastodon" aria-hidden="true"></i></a>&nbsp;</li><li><a href="mailto:[email protected]" rel="me" aria-label="Email" title="Email"><i class="fa-solid fa-envelope" aria-hidden="true"></i></a>&nbsp;</li></ul><form id="search" action="https://runtimeterror.dev/search/" method="get"><input type="input" id="search-query" name="query" placeholder="grep -i" aria-label="search text">
<button type="submit" aria-label="search button"><i class="fa-solid fa-magnifying-glass"></i></button></form></div><hr><div class="aside__content"><p>Using Hugo resources.GetRemote to fetch a list of bad bots and generate a valid robots.txt at build time.</p><hr><h3>more backstage</h3><ul><li><a href="https://runtimeterror.dev/dynamic-robots-txt-hugo-external-data-sources/" title="Generate a Dynamic robots.txt File in Hugo with External Data Sources">Generate a Dynamic robots.txt File in Hugo with External Data Sources</a></li><li><a href="https://runtimeterror.dev/kudos-with-cabin/" title="Kudos With Cabin">Kudos With Cabin</a></li><li><a href="https://runtimeterror.dev/further-down-the-bunny-hole/" title="Further Down the Bunny Hole">Further Down the Bunny Hole</a></li><li><a href="https://runtimeterror.dev/the-slash-page-scoop/" title="The Slash Page Scoop">The Slash Page Scoop</a></li><li><a href="https://runtimeterror.dev/prettify-hugo-rss-feed-xslt/" title="Prettify Hugo RSS Feeds with XSLT">Prettify Hugo RSS Feeds with XSLT</a></li><li><a href="/categories/backstage/"><i>See all Backstage</i></a></li></ul><hr><h3>features</h3><ul><li><a href="https://runtimeterror.dev/automating-camera-notifications-home-assistant-ntfy/" title="Automating Security Camera Notifications With Home Assistant and Ntfy">Automating Security Camera Notifications With Home Assistant and Ntfy</a></li><li><a href="https://runtimeterror.dev/create-vms-chromebook-hashicorp-vagrant/" title="Create Virtual Machines on a Chromebook with HashiCorp Vagrant">Create Virtual Machines on a Chromebook with HashiCorp Vagrant</a></li><li><a href="https://runtimeterror.dev/using-vs-code-to-explore-giant-log-bundles/" title="Using VS Code to explore giant log bundles">Using VS Code to explore giant log bundles</a></li><li><a href="https://runtimeterror.dev/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/" title="Burn an ISO to USB with the Chromebook Recovery Utility">Burn an ISO to USB with the Chromebook Recovery Utility</a></li><li><a href="https://runtimeterror.dev/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/" title="BitWarden password manager self-hosted on free Google Cloud instance">BitWarden password manager self-hosted on free Google Cloud instance</a></li></ul><hr><h3>status.lol</h3><div class="statuslol_container"><div class="statuslol" style="display: flex; flex-wrap: wrap; gap: 1em; background: #f6e7e4; color: #111; border-radius: .5em; padding: 1em;"><div class="statuslol_emoji_container" style="flex: 0 0 1em; font-size: 3em; padding-right: 0;"><img class="statuslol_emoji" style="width: 1.5em;" alt="Framed picture" src="https://cdn.cache.lol/type/fluentui-emoji-main/assets/Framed picture/3D/framed_picture_3d.png"></div><div class="statuslol_content" style="flex-grow: 1; flex-shrink: 1; flex-basis: 0; margin: -.5em 0 0 0; text-align: left; overflow-wrap: break-word; overflow-wrap: anywhere; color: #111;"><p>I picked the perfect day to get back on my bike. </p>
<p><img src="https://cdn.some.pics/jbowdre/6715655ce583c.jpg" alt="A teal and yellow bicycle leaning on its kickstand in the grass along a river bank. The sky is light blue with no visible clouds, and a roadway bridge can be seen crossing the river in the distance." title="A teal and yellow bicycle leaning on its kickstand in the grass along a river bank. The sky is light blue with no visible clouds, and a roadway bridge can be seen crossing the river in the distance."></p> <div class="statuslol_time"><small style="opacity: .5; color: #111;"><a style="color: #111;" href="https://jbowdre.status.lol/67156617a257c">1 hour ago</a></small></div></div></div></div><script src="https://status.lol/jbowdre.js?time&amp;link&amp;fluent&amp;pretty" defer=""></script><hr><h3>current theme song</h3><div class="theme-song"><a href="https://musicthread.app/link/2nWKExTZQK2kFjNiWvvWRGyC5he"><img src="https://img.musicthread.app/525db949f864fea31447410f87ff62216a061086/68747470733a2f2f7265736f75726365732e746964616c2e636f6d2f696d616765732f31363739333363662f316561622f343061392f383639612f3435316463376232653566622f363430783634302e6a7067" alt="Album art for Samurai by Lupe Fiasco" title="Album art for Samurai by Lupe Fiasco"></a><br><a href="https://musicthread.app/link/2nWKExTZQK2kFjNiWvvWRGyC5he"><strong>Samurai</strong></a><br>Lupe Fiasco</div><script src="https://res.runtimeterror.dev/js/theme-song.js?id=2aVjZUocjk96LELFbV5JvJjm14v" defer=""></script></div></section><footer class="page__footer"><span class="footer_slashes">{"<a href="/slashes/" title="slashpages">/slashes</a>": ["<a href="/about" title="about this site">/about</a>", "<a href="/changelog" title="recent changes to the site">/changelog</a>", "<a href="/colophon" title="how this site works">/colophon</a>", "<a href="/homelab" title="my homelab setup">/homelab</a>", "<a href="/save" title="referral links">/save↗</a>", "<a href="/uses" title="stuff i use">/uses↗</a>"]}</span><br><span class="copyright" xmlns:cc="http://creativecommons.org/ns#" xmlns:dct="http://purl.org/dc/terms/">{"copyright": ["content": "<a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" rel="license noopener noreferrer" title="Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International">CC BY-NC-SA 4.0↗</a>", "code": "<a href="https://github.com/jbowdre/runtimeterror/blob/main/LICENSE" rel="license noopener noreferrer" title="MIT License">MIT↗</a>"]}</span><br><span class="footer_links">&lt;<a href="https://github.com/jbowdre/runtimeterror">view source↗</a>&gt;</span>
<script src="/js/back-to-top.min.js"></script><script>addBackToTop()</script><script>window.store={"https://runtimeterror.dev/posts/":{title:"Index of Posts",tags:[],content:"",description:"",url:"https://runtimeterror.dev/posts/"},"https://runtimeterror.dev/publish-silverbullet-notes-quartz/":{title:"Publishing (Selected) SilverBullet Notes with Quartz and GitHub Actions",tags:["api","automation","caddy","cicd","selfhosting","tailscale"],content:`It's been about two months since I switched↗ my note-keeping efforts from Obsidian↗ to SilverBullet↗, and I've been really enjoying it. SilverBullet is easy to deploy with Docker, and it's packed with useful features↗ without becoming slow or otherwise cumbersome. Being able to access and write my notes from any device with a web browser has been super convenient.
But one use case I hadn't yet migrated from Obsidian to SilverBullet was managing the notes I share publicly at notes.runtimeterror.dev↗ using Quartz↗, a fancy static site generator optimized for building &quot;digital gardens&quot; from Obsidian vaults. I had been using Quartz with a public repo↗ containing the Quartz code with a dedicated (public) Obsidian vault folder embedded within↗.
I played a bit with SilverBullet's publishing plugin↗, which would let me selectively publish notes in certain folders or bearing certain tags, but the HTML it produces is a bit sparse. I didn't want to give up the Quartz niceties like the auto-generated navigation menu and built-in search.
After a little experimentation I settled on an approach that I think works really well for my needs:
SilverBullet syncs to a private repo via the Git plug↗. Pushes to that private repo trigger a workflow run in my (public) Quartz repo. A workflow in the Quartz repo clones the private SilverBullet repo to content/. Quartz processes the Markdown files in the content/ directory and renders HTML for the files with publish: true in the front matter as HTML files in public/. The contents of public/ are transferred to my server via Tailscale, and then served by Caddy. This post will describe the entire setup in detail (though not necessarily in that order).
Plugging in the Git plug SilverBullet can be extended through the use of plugs↗, and installing the Git plug↗ should make it easy to sync my SilverBullet content to a private GitHub repo.
But I should probably initialize my space (the SilverBullet equivalent of a vault/notebook/graph) as a git repo first.
Recall from my setup notes that I'm mounting a folder named ./space into my SilverBullet container at /space. I'll need to turn that into a git repo so I SSH to my Docker host, move into the folder containing my SilverBullet space, and initialize the repo:
cd /opt/silverbullet/space # [tl! .cmd:1] git init . I'll connect this local git repo to a private GitHub repo, but I'll need to use a Personal Access Token (PAT)↗ for the git interactions since the Git plug running inside the SilverBullet container won't have access to my SSH private key. So I create a new PAT↗, scope it only to my new private repo (jbowdre/spaaace), and grant it the contents: write permission there. I can then use the PAT when I set up the remote and push my first commit:
git remote add origin https://github_pat_[...]@github.com/jbowdre/spaaace.git # [tl! .cmd:3] git add . git commit -m &#34;initial commit&#34; git push --set-upstream origin main This stores the authentication token directly inside the local git configuration (in /opt/silverbullet/space/.git/config) so that git operations performed within the container will be automatically authenticated.
Now that my repo is ready, I can go ahead and install and configure the Git plug. I do that by logging into my SilverBullet instance on the web (https://silverbullet.tailnet-name.ts.net), pressing [Ctrl] + / to bring up the command palette, typing/selecting Plugs: Add, and pasting in the URI for the Git plug: github:silverbulletmd/silverbullet-git/git.plug.js.
The docs say that I can add the following to my SETTINGS file to enable automatic syncing:
git: autoCommitMinutes: 5 autoSync: true But that doesn't actually seem to work for some reason (at least for me). That's okay, though, because I can easily add a keyboard shortcut ([Ctrl] + [Alt] + .) to quickly sync on-demand:
shortcuts: - command: &#34;{[Git: Sync]}&#34; key: &#34;Ctrl-Alt-.&#34; Brace for it...
Note that the command target for the shortcut is wrapped with a square bracket wrapped with a curly brace ({[ ]}). It won't work if you do a Go-template-style double-curly-braces ({{ }}).
Ask me how I know (and how long it took me to find my mistake!).
I'll use [Ctrl] + / to get the command pallette again and run System: Reload to activate my change, and then simply pressing [Ctrl] + [Alt] + . will trigger a git pull + git commit + git push (as needed) sequence.
That takes care of getting my SilverBullet content into GitHub. Now let's see how it gets published.
Setting up Quartz &quot;Installing&quot; Quartz is pretty straight forward thanks to the instructions on the Quartz website↗. I just ran these commands on my laptop:
git clone https://github.com/jackyzha0/quartz.git # [tl! .cmd:3] cd quartz npm i npx quartz create By default, Quartz expects my Obsidian content to be in the content/ directory (and there's a placeholder file there for now). I'll replace that with my spaaace repo for testing but also add that path to the .gitignore file to ensure I don't accidentally commit my private notes:
rm -rf content/ # [tl! .cmd:2] git clone [email protected]:jbowdre/spaaace.git content echo &#34;content&#34; &gt;&gt; .gitignore From there I can move on to configuring Quartz. The documentation↗ has helpful information on some of the configuration options so I'm just going to highlight the changes that are particularly important to this specific setup.
In the plugins: section of quartz.config.ts, I enable the ExplicitPublish filter plugin↗ to tell Quartz to only render pages with publish: true in the frontmatter:
plugins: { filters: [ Plugin.RemoveDrafts(), Plugin.ExplicitPublish(), // [tl! ++] ], [...] } That will allow me very granular control over which posts are published (and which remain private), but the Private Pages↗ Quartz documentation page warns that the ExplicitPublish plugin only filters out Markdown files. All other files (images, PDFs, plain TXTs) will still be processed and served publicly. I don't intend to include screenshots or other media with these short code-heavy notes so I scroll back up in the quartz.config.ts file and add a little regex to the ignorePatterns section:
configuration: { ignorePatterns: [ &#34;private&#34;, &#34;templates&#34;, &#34;**/!(*.md)&#34; // [tl! ++] ], [...] } That will avoid processing any non-Markdown files.
The rest of the Quartz setup follows the documentation, including the steps to connect my (public) GitHub repository↗.
Before publishing, I can check my work by generating and serving the Quartz content locally:
npx quartz build --serve # [tl! .cmd ** .nocopy:1,13] Quartz v4.4.0 Cleaned output directory \`public\` in 29ms Found 198 input files from \`content\` in 24ms Parsed 198 Markdown files in 4s Filtered out 198 files in 398μs ⠋ Emitting output files Warning: you seem to be missing an \`index.md\` home page file at the root of your \`content\` folder. This may cause errors when deploying. # [tl! ** ~~] Emitted 8 files to \`public\` in 44ms Done processing 0 files in 103ms Started a Quartz server listening at http://localhost:8080 hint: exit with ctrl+c Oops! Remember how the ExplicitPublish plugin will only process notes with publish: true set in the frontmatter? Since I just imported the notes from my (public) Obsidian vault into SilverBullet none of them have that attribute set yet. (But hey, this is a great way to test that my filters work!)
Let me run through the notes I want to be public and update them accordingly...
--- title: Trigger remote workflow with GitHub Actions tags: [github] publish: true # [tl! ++] --- ... And then I'll try again:
npx quartz build --serve # [tl! .cmd ** .nocopy:1,11] Quartz v4.4.0 Cleaned output directory \`public\` in 6ms Found 198 input files from \`content\` in 32ms Parsed 198 Markdown files in 4s # [tl! **:2] Filtered out 123 files in 404μs Emitted 130 files to \`public\` in 497ms Done processing 198 files in 4s Started a Quartz server listening at http://localhost:8080 # [tl! **] hint: exit with ctrl+c That's more like it!
But serving my notes from my laptop is only so useful. Let's keep going and see what it takes to publish them on the World Wide Web!
Publish publicly I've previously written about my GitHub Actions workflow for publishing my Gemini capsule, and I'm going to reuse a lot of the same ideas here. I'll create a workflow that performs the steps needed to render the HTML to the public/ directory, establishes a Tailscale↗ tunnel to my server, and transfers the rendered content there. Those static files will then be served with Caddy↗, taking advantage of its automatic HTTPS abilities.
Server prep The setup on the server is pretty simple. I just create a directory to hold the files, and make sure it's owned by the deploy user:
sudo mkdir /opt/notes # [tl! .cmd:1] sudo chown -R deploy:deploy /opt/notes I'll also go ahead and update my Caddyfile based on the Quartz documentation↗, but I won't reload Caddy just yet (I'll wait until I have some content to serve):
notes.runtimeterror.dev { bind 192.0.2.1 # replace with server&#39;s public interface address root * /opt/notes/public try_files {path} {path}.html {path}/ =404 file_server encode gzip handle_errors { rewrite * /{err.status_code}.html file_server } } Tailscale prep The full details of how I configured Tailscale to support this deploy-from-github-actions use case are available in another post so I won't repeat the explanation here. But these are the items I added to my Tailscale ACL to create a set of tags (one for the GitHub runner, one for the server it will deploy to), allow SSH traffic from the runner to the server, and configure Tailscale SSH to let the runner log in to the server as the deploy user:
{ &#34;tagOwners&#34;: { &#34;tag:gh-bld&#34;: [&#34;group:admins&#34;], // github builder &#34;tag:gh-srv&#34;: [&#34;group:admins&#34;], // server it can deploy to }, &#34;acls&#34;: [ { // github runner can talk to the deployment target &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;tag:gh-bld&#34;], &#34;ports&#34;: [ &#34;tag:gh-srv:22&#34; ], } ], &#34;ssh&#34;: [ { // runner can SSH to the server as the &#39;deploy&#39; user &#34;action&#34;: &#34;accept&#34;, &#34;src&#34;: [&#34;tag:gh-bld&#34;], &#34;dst&#34;: [&#34;tag:gh-srv&#34;], &#34;users&#34;: [&#34;deploy&#34;], } ], } Workin' on a workflow With the prep out of the way, I'm ready to start on my deployment workflow.
My .github/workflows/deploy.yaml starts simply with just setting some defaults, and it configures the workflow to run on pushes to the default branch (v4), repository_dispatch events↗, and workflow_dispatch events (manual executions).
# torchlight! {&#34;lineNumbers&#34;:true} name: Deploy Notes # run on changes to default (v4) branch, repository_dispatch events, and manual executions on: push: branches: - v4 repository_dispatch: workflow_dispatch: concurrency: # prevent concurrent deploys doing strange things group: deploy cancel-in-progress: false # Default to bash defaults: run: shell: bash The deploy job then starts with checking out↗ the repo where the job is running and the private repo↗ holding my SilverBullet space, which gets cloned to the content/ directory.
To be able to fetch the private jbowdre/spaaace repo, I'll needed to generate another PAT scoped to that repo. This one only needs read access (no write) to the contents of the repo. The PAT and the repo path get stored as repository secrets.
# torchlight! {&#34;lineNumbers&#34;: true, &#34;lineNumbersStart&#34;: 20} jobs: deploy: name: Build and deploy Quartz site runs-on: ubuntu-latest steps: - name: Checkout Quartz uses: actions/checkout@v4 - name: Checkout notes uses: actions/checkout@v4 with: repository: \${{ secrets.SPAAACE_REPO }} token: \${{ secrets.SPAAACE_REPO_PAT }} path: content I can then move on to installing Node and building the Quartz site:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;: 33} - name: Setup Node uses: actions/setup-node@v4 with: node-version: 20 - name: Build Quartz run: | npm ci npx quartz build I use the Tailscale GitHub Action↗ to connect the ephemeral GitHub runner to my tailnet, and apply that ACL tag that grants it SSH access to my web server (and nothing else).
I've also stored that web server's SSH public key as a repository secret, and I make sure that gets added to the runner's ~/.ssh/known_hosts file so that it can connect without being prompted to verify the host keys.
Finally, I use rsync to copy the public/ directory (with all the rendered HTML content) to /opt/notes/ on the server.
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:41} - name: Connect to Tailscale uses: tailscale/github-action@v2 with: oauth-client-id: \${{ secrets.TS_API_CLIENT_ID }} oauth-secret: \${{ secrets.TS_API_CLIENT_SECRET }} tags: \${{ secrets.TS_TAG }} - name: Configure SSH known hosts run: | mkdir -p ~/.ssh echo &#34;\${{ secrets.SSH_KNOWN_HOSTS }}&#34; &gt; ~/.ssh/known_hosts chmod 644 ~/.ssh/known_hosts - name: Deploy Quartz run: | rsync -avz --delete -e ssh public/ deploy@\${{ secrets.QUARTZ_HOST }}:\${{ secrets.QUARTZ_CONTENT_PATH }} After making sure that I've added all the required repository secrets, I can commit and push my code and it should trigger the deployment...
git add . # [tl! .cmd:2] git commit -m &#34;deployment test&#34; git push And I can log back onto my server and confirm that the content is there:
ls -l /opt/notes/public/ # [tl! .cmd .nocopy:1,17] drwxr-xr-x - deploy 29 Sep 19:04 ChromeOS drwxr-xr-x - deploy 29 Sep 19:04 CICD drwxr-xr-x - deploy 29 Sep 19:04 Containers drwxr-xr-x - deploy 29 Sep 19:04 Development drwxr-xr-x - deploy 29 Sep 19:04 Linux drwxr-xr-x - deploy 29 Sep 19:04 Saltstack drwxr-xr-x - deploy 29 Sep 19:04 static drwxr-xr-x - deploy 29 Sep 19:04 tags drwxr-xr-x - deploy 29 Sep 19:04 VMware drwxr-xr-x - deploy 29 Sep 19:04 Windows .rw-r--r-- 3.9k deploy 29 Sep 19:04 404.html .rw-r--r-- 30k deploy 29 Sep 19:04 index.css .rw-r--r-- 25k deploy 29 Sep 19:04 index.html .rw-r--r-- 4.8k deploy 29 Sep 19:04 index.xml .rw-r--r-- 62k deploy 29 Sep 19:04 postscript.js .rw-r--r-- 903 deploy 29 Sep 19:04 prescript.js .rw-r--r-- 11k deploy 29 Sep 19:04 sitemap.xml Now that I've got some content I can reload Caddy:
sudo caddy reload -c /etc/caddy/Caddyfile # [tl! .cmd .nocopy:1,2] 2024/09/29 19:11:17.705 INFO using config from file {&#34;file&#34;: &#34;/etc/caddy/Caddyfile&#34;} 2024/09/29 19:11:17.715 INFO adapted config to JSON {&#34;adapter&#34;: &#34;caddyfile&#34;} And check to see if the site is up:
Nice, my notes are online!
Trigger workflow The last piece of this puzzle is to trigger the deployment workflow whenever my SilverBullet notes get synced to that private repo. Fortunately I have a note↗ that describes how to do that.
I'll set up yet another GitHub PAT, this one scoped to the jbowdre/notes public repo with permissions to write to the repository contents. Then I just need a workflow in the private jbowdre/spaaace repo to make a POST to https://api.github.com/repos/jbowdre/notes/dispatches whenever a Markdown file is created/updated.
Here's .github/workflows/trigger.yaml:
# torchlight! {&#34;lineNumbers&#34;:true} name: Trigger Quartz Build on: push: paths: - &#34;**.md&#34; defaults: run: shell: bash jobs: publish: name: Trigger runs-on: ubuntu-latest steps: - name: Remote trigger run: | curl -X POST \\ -H &#34;Authorization: token \${{ secrets.NOTES_REPO_PAT }}&#34; \\ -H &#34;Accept: application/vnd.github.v3+json&#34; \\ https://api.github.com/repos/\${{ secrets.NOTES_REPO }}/dispatches \\ -d &#39;{&#34;event_type&#34;: &#34;remote-trigger&#34;}&#39; Once I commit and push this, any future changes to a markdown file will tell the GitHub API to kick off the remote workflow in the public repo.
Conclusion And that's it! I can now write and publish short notes from anywhere thanks to the SilverBullet web app, Quartz, and some GitHub Actions shenanigans. This was a little more work to set up on the front end but my new publishing workflow couldn't be simpler: just write a note and hit [Ctrl] + [Alt] + . to sync the change to the private GitHub repo and kick off the deployment.
Now I don't have an excuse to keep sitting on this backlog of quick notes I've been meaning to share...
`,description:"A long note about how I publish short notes from SilverBullet using Quartz, Tailscale, Caddy, and GitHub Actions.",url:"https://runtimeterror.dev/publish-silverbullet-notes-quartz/"},"https://runtimeterror.dev/caddy-tailscale-alternative-cloudflare-tunnel/":{title:"Caddy + Tailscale as an Alternative to Cloudflare Tunnel",tags:["caddy","cloud","containers","docker","networking","selfhosting","tailscale"],content:`Earlier this year, I shared how I used Cloudflare Tunnel to publish some self-hosted resources on the internet without needing to expose any part of my home network. Since then, I've moved many resources to bunny.net↗ (including this website). I left some domains at Cloudflare, primarily just to benefit from the convenience of Cloudflare Tunnel, but I wasn't thrilled about being so dependent upon a single company that controls so much of the internet.
However a post on Tailscale's blog this week↗ reminded me that there was another easy approach using solutions I'm already using heavily: Caddy and Tailscale. Caddy is a modern web server (that works great as a reverse proxy with automatic HTTPS), and Tailscale makes secure networking simple. Combining the two allows me to securely serve web services without any messy firewall configurations.
So here's how I ditched Cloudflare Tunnel in favor of Caddy + Tailscale.
Docker Compose config To keep things simple, I'll deploy the same speedtest app I used to demo Cloudflare Tunnel↗ on a Docker host located in my homelab.
Here's a basic config to run openspeedtest↗ on HTTP only (defaults to port 3000):
# torchlight! {&#34;lineNumbers&#34;:true} services: speedtest: image: openspeedtest/latest container_name: speedtest restart: unless-stopped ports: - 3000:3000 A Tailscale sidecar I can easily add Tailscale in a sidecar container to make my new speedtest available within my tailnet:
# torchlight! {&#34;lineNumbers&#34;:true} services: speedtest: image: openspeedtest/latest container_name: speedtest restart: unless-stopped ports: # [tl! --:1] - 3000:3000 network_mode: service:tailscale # [tl! ++] tailscale: # [tl! ++:12] image: tailscale/tailscale:latest container_name: speedtest-tailscale restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_STATE_DIR: /var/lib/tailscale/ volumes: - ./ts_data:/var/lib/tailscale/ Note that I no longer need to ask the host to expose port 3000 from the container; instead, I bridge the speedtest container's network with that of the tailscale container.
And I create a simple .env file with the secrets required for connecting to Tailscale using a pre-authentication key↗:
# torchlight! {&#34;lineNumbers&#34;:true} TS_AUTHKEY=tskey-auth-somestring-somelongerstring TS_HOSTNAME=speedtest After a quick docker compose up -d I can access my new speedtest at http://speedtest.tailnet-name.ts.net:3000. Next I just need to put it behind Caddy.
Caddy configuration I already have Caddy↗ running on a server in Vultr↗ (referral link↗) so I'll be using that to front my new speedtest server. I add a DNS record in Bunny for speed.runtimeterror.dev pointed to the server's public IP address, and then add a corresponding block to my /etc/caddy/Caddyfile configuration:
speed.runtimeterror.dev { bind 192.0.2.1 # replace with server&#39;s public interface address reverse_proxy http://speedtest.tailnet-name.ts.net:3000 } Caddy binding
Since I'm already using Tailscale Serve for other services on this server, I use the bind directive to explicitly tell Caddy to listen on the server's public IP address. By default, it will try to listen on all interfaces and that would conflict with tailscaled that's already bound to the tailnet-internal IP.
The reverse_proxy directive points to speedtest's HTTP endpoint within my tailnet; all traffic between tailnet addresses is already encrypted, and I can just let Caddy obtain and serve the SSL certificate automagically.
Now I just need to reload the Caddyfile:
sudo caddy reload -c /etc/caddy/Caddyfile # [tl! .cmd] INFO using config from file {&#34;file&#34;: &#34;/etc/caddy/Caddyfile&#34;} # [tl! .nocopy:1] INFO adapted config to JSON {&#34;adapter&#34;: &#34;caddyfile&#34;} And I can try out my speedtest at https://speed.runtimeterror.dev:
Not bad!
Conclusion Combining the powers (and magic) of Caddy and Tailscale makes it easy to publicly serve content from private resources without compromising on security or extending vendor lock-in. This will dramatically simplify migrating the rest of my domains from Cloudflare to Bunny.
`,description:"Combining the magic of Caddy and Tailscale to serve web content from my homelab - and declaring independence from Cloudflare in the process.",url:"https://runtimeterror.dev/caddy-tailscale-alternative-cloudflare-tunnel/"},"https://runtimeterror.dev/silverbullet-self-hosted-knowledge-management/":{title:"SilverBullet: Self-Hosted Knowledge Management Web App",tags:["cloudflare","containers","docker","linux","selfhosting","tailscale"],content:`I recently posted on my other blog↗ about trying out SilverBullet↗, an open-source self-hosted web-based note-keeping app. SilverBullet has continued to impress me as I use it and learn more about its features↗. It really fits my multi-device use case much better than Obsidian ever did (even with its paid sync plugin).
In that post, I shared a brief overview of how I set up SilverBullet:
I deployed my instance in Docker alongside both a Tailscale sidecar and Cloudflare Tunnel sidecar. This setup lets me easily access/edit/manage my notes from any device I own by just pointing a browser at https://silverbullet.tailnet-name.ts.net/. And I can also hit it from any other device by using the public Cloudflare endpoint which is further protected by an email-based TOTP challenge. Either way, I don't have to worry about installing a bloated app or managing a complicated sync setup. Just log in and write.
This post will go into a bit more detail about that configuration.
Preparation I chose to deploy SilverBullet on an Ubuntu 22.04 VM in my homelab which was already set up for serving Docker workloads so I'm not going to cover the Docker installation process↗ here. I tend to run my Docker workloads out of /opt/ so I start this journey by creating a place to hold the SilverBullet setup:
sudo mkdir -p /opt/silverbullet # [tl! .cmd] I set appropriate ownership of the folder and then move into it:
sudo chown john:docker /opt/silverbullet # [tl! .cmd:1] cd /opt/silverbullet SilverBullet Setup The documentation offers easy-to-follow guidance on installing SilverBullet with Docker Compose↗, and that makes for a pretty good starting point. The only change I make here is setting the SB_USER variable from an environment variable instead of directly in the YAML:
# torchlight! {&#34;lineNumbers&#34;:true} # docker-compose.yml services: silverbullet: image: zefhemel/silverbullet container_name: silverbullet restart: unless-stopped environment: SB_USER: &#34;\${SB_CREDS}&#34; volumes: - ./space:/space ports: - 3000:3000 watchtower: image: containrrr/watchtower container_name: silverbullet-watchtower volumes: - /var/run/docker.sock:/var/run/docker.sock I used a password manager to generate a random password and username, and I stored those in a .env file alongside the Docker Compose configuration; I'll need those credentials to log in to each SilverBullet session. For example:
# .env SB_CREDS=&#39;alldiaryriver:XCTpmddGc3Ga4DkUr7DnPBYzt1b&#39; That's all that's needed for running SilverBullet locally, and I could go ahead and docker compose up -d to get it running. But I really want to be able to access my notes from other systems too, so let's move on to enabling remote access right away.
Remote Access Tailscale It's no secret that I'm a big fan of Tailscale so I use Tailscale Serve to enable secure remote access through my tailnet. I just need to add in a Tailscale sidecar and update the silverbullet service to share Tailscale's network:
# torchlight! {&#34;lineNumbers&#34;:true} # docker-compose.yml services: tailscale: # [tl! ++:12 **:12] image: tailscale/tailscale:latest container_name: silverbullet-tailscale restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_STATE_DIR: /var/lib/tailscale/ TS_SERVE_CONFIG: /config/serve-config.json volumes: - ./ts_data:/var/lib/tailscale/ - ./serve-config.json:/config/serve-config.json silverbullet: image: zefhemel/silverbullet container_name: silverbullet restart: unless-stopped environment: SB_USER: &#34;\${SB_CREDS}&#34; volumes: - ./space:/space ports: # [tl! --:1 **:1] - 3000:3000 network_mode: service:tailscale # [tl! ++ **] watchtower: # [tl! collapse:4] image: containrrr/watchtower container_name: silverbullet-watchtower volumes: - /var/run/docker.sock:/var/run/docker.sock That of course means adding a few more items to the .env file:
a pre-authentication key↗, the hostname to use for the application's presence on my tailnet, and the --ssh extra argument to enable SSH access to the container (not strictly necessary, but can be handy for troubleshooting). # .env SB_CREDS=&#39;alldiaryriver:XCTpmddGc3Ga4DkUr7DnPBYzt1b&#39; TS_AUTHKEY=tskey-auth-[...] # [tl! ++:2 **:2] TS_HOSTNAME=silverbullet TS_EXTRA_ARGS=--ssh And I need to create a serve-config.json file to configure Tailscale Serve to proxy port 443 on the tailnet to port 3000 on the container:
// torchlight! {&#34;lineNumbers&#34;:true} // serve-config.json { &#34;TCP&#34;: { &#34;443&#34;: { &#34;HTTPS&#34;: true } }, &#34;Web&#34;: { &#34;silverbullet.tailnet-name.ts.net:443&#34;: { &#34;Handlers&#34;: { &#34;/&#34;: { &#34;Proxy&#34;: &#34;http://127.0.0.1:3000&#34; } } } } } Cloudflare Tunnel But what if I want to consult my notes from outside of my tailnet? Sure, I could use Tailscale Funnel to publish the SilverBullet service on the internet, but (1) funnel would require me to use a URL like https://silverbullet.tailnet-name.ts.net instead of simply https://silverbullet.example.com and (2) I've seen enough traffic logs to not want to expose a login page directly to the public internet if I can avoid it.
Cloudflare Tunnel is able to address those concerns without a lot of extra work. I can set up a tunnel at silverbullet.example.com and use Cloudflare Access↗ to put an additional challenge in front of the login page.
I just have to add a cloudflared container to my stack:
# torchlight! {&#34;lineNumbers&#34;:true} # docker-compose.yml services: tailscale: # [tl! collapse:12] image: tailscale/tailscale:latest container_name: silverbullet-tailscale restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_STATE_DIR: /var/lib/tailscale/ TS_SERVE_CONFIG: /config/serve-config.json volumes: - ./ts_data:/var/lib/tailscale/ - ./serve-config.json:/config/serve-config.json cloudflared: # [tl! ++:9 **:9] image: cloudflare/cloudflared restart: unless-stopped container_name: silverbullet-cloudflared command: - tunnel - run - --token - \${CLOUDFLARED_TOKEN} network_mode: service:tailscale silverbullet: image: zefhemel/silverbullet container_name: silverbullet restart: unless-stopped environment: SB_USER: &#34;\${SB_CREDS}&#34; volumes: - ./space:/space network_mode: service:tailscale watchtower: # [tl! collapse:4] image: containrrr/watchtower container_name: silverbullet-watchtower volumes: - /var/run/docker.sock:/var/run/docker.sock To get the required $CLOUDFLARED_TOKEN, I create a new cloudflared tunnel↗ in the Cloudflare dashboard and add the generated token value to my .env file:
# .env SB_CREDS=&#39;alldiaryriver:XCTpmddGc3Ga4DkUr7DnPBYzt1b&#39; TS_AUTHKEY=tskey-auth-[...] TS_HOSTNAME=silverbullet TS_EXTRA_ARGS=--ssh CLOUDFLARED_TOKEN=eyJhIjo[...]BNSJ9 # [tl! ++ **] Back in the Cloudflare Tunnel setup flow, I select my desired public hostname (silverbullet.example.com) and then specify that the backend service is http://localhost:3000.
Now I'm finally ready to start up my containers:
docker compose up -d # [tl! .cmd .nocopy:1,5] [+] Running 5/5 ✔ Network silverbullet_default Created ✔ Container silverbullet-watchtower Started ✔ Container silverbullet-tailscale Started ✔ Container silverbullet Started ✔ Container silverbullet-cloudflared Started Cloudflare Access The finishing touch will be configuring a bit of extra protection in front of the public-facing login page, and Cloudflare Access makes that very easy. I'll just use the wizard to add a new web application↗ through the Cloudflare Zero Trust dashboard.
The first part of that workflow asks &quot;What type of application do you want to add?&quot;. I select Self-hosted.
The next part asks for a name (SilverBullet), Session Duration (24 hours), and domain (silverbullet.example.com). I leave the defaults for the rest of the Configuration Application step and move on to the next one.
I'm then asked to Add Policies, and I have to start by giving a name for my policy. I opt to name it Email OTP because I'm going to set up email-based one-time passcodes. In the Configure Rules section, I choose Emails as the selector and enter my own email address as the single valid value.
And then I just click through the rest of the defaults.
Recap So now I have SilverBullet running in Docker Compose on a server in my homelab. I can access it from any device on my tailnet at https://silverbullet.tailnet-name.ts.net (thanks to the magic of Tailscale Serve). I can also get to it from outside my tailnet at https://silverbullet.example.com (thanks to Cloudflare Tunnel), and but I'll use a one-time passcode sent to my approved email address before also authenticating through the SilverBullet login page (thanks to Cloudflare Access).
I think it's a pretty sweet setup that gives me full control and ownership of my notes and lets me read/write my notes from multiple devices without having to worry about synchronization.
`,description:"Deploying SilverBullet with Docker Compose, and accessing it from anywhere with Tailscale and Cloudflare Tunnel.",url:"https://runtimeterror.dev/silverbullet-self-hosted-knowledge-management/"},"https://runtimeterror.dev/dynamic-robots-txt-hugo-external-data-sources/":{title:"Generate a Dynamic robots.txt File in Hugo with External Data Sources",tags:["api","hugo","meta"],content:`I shared back in April my approach for generating a robots.txt file to block discourage AI crawlers from stealing my content. That setup used a static list of known-naughty user agents (derived from the community-maintained ai.robots.txt project↗) in my Hugo config file. It's fine (I guess) but it can be hard to keep up with the bad actors - and I'm too lazy to manually update my local copy of the list when things change.
Wouldn't it be great if I could cut out the middle man (me) and have Hugo work straight off of that remote resource? Inspired by Luke Harris's work↗ with using Hugo's resources.GetRemote function↗ to build a robots.txt from the Dark Visitors↗ API, I set out to figure out how to do that for ai.robots.txt.
While I was tinkering with that, Adam↗ and Cory↗ were tinkering with a GitHub Actions workflow to streamline the addition of new agents. That repo now uses a JSON file↗ as the Source of Truth for its agent list, and a JSON file available over HTTP looks an awful lot like a poor man's API to me.
So here's my updated solution.
As before, I'm taking advantage of Hugo's robot.txt templating↗ to build the file. That requires the following option in my config/hugo.toml file:
enableRobotsTXT = true That tells Hugo to process layouts/robots.txt, which I have set up with this content to insert the sitemap and greet robots who aren't assholes:
# torchlight! {&#34;lineNumbers&#34;:true} Sitemap: {{ .Site.BaseURL }}sitemap.xml # hello robots [^_^] # let&#39;s be friends &lt;3 User-agent: * Disallow: # except for these bots which are not friends: {{ partial &#34;bad-robots.html&#34; . }} I opted to break the heavy lifting out into layouts/partials/bad-robots.html to keep things a bit tidier in the main template. This starts out simply enough with using resources.GetRemote to fetch the desired JSON file, and printing an error if that doesn't work:
# torchlight! {&#34;lineNumbers&#34;:true} {{- $url := &#34;https://raw.githubusercontent.com/ai-robots-txt/ai.robots.txt/main/robots.json&#34; -}} {{- with resources.GetRemote $url -}} {{- with .Err -}} {{- errorf &#34;%s&#34; . -}} {{- else -}} The JSON file looks a bit like this, with the user agent strings as the top-level keys:
{ &#34;Amazonbot&#34;: { // [tl! **] &#34;operator&#34;: &#34;Amazon&#34;, &#34;respect&#34;: &#34;Yes&#34;, &#34;function&#34;: &#34;Service improvement and enabling answers for Alexa users.&#34;, &#34;frequency&#34;: &#34;No information. provided.&#34;, &#34;description&#34;: &#34;Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses.&#34; }, &#34;anthropic-ai&#34;: { // [tl! **] &#34;operator&#34;: &#34;[Anthropic](https:\\/\\/www.anthropic.com)&#34;, &#34;respect&#34;: &#34;Unclear at this time.&#34;, &#34;function&#34;: &#34;Scrapes data to train Anthropic&#39;s AI products.&#34;, &#34;frequency&#34;: &#34;No information. provided.&#34;, &#34;description&#34;: &#34;Scrapes data to train LLMs and AI products offered by Anthropic.&#34; }, &#34;Applebot-Extended&#34;: { // [tl! **] &#34;operator&#34;: &#34;[Apple](https:\\/\\/support.apple.com\\/en-us\\/119829#datausage)&#34;, &#34;respect&#34;: &#34;Yes&#34;, &#34;function&#34;: &#34;Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.&#34;, &#34;frequency&#34;: &#34;Unclear at this time.&#34;, &#34;description&#34;: &#34;Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple&#39;s foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools.&#34; }, {...} } There's quite a bit more detail in this JSON than I really care about; all I need for this are the bot names. So I unmarshal the JSON data, iterate through the top-level keys to extract the names, and print a line starting with User-agent: followed by the name for each bot.
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:6} {{- $robots := unmarshal .Content -}} {{- range $botname, $_ := $robots }} {{- printf &#34;User-agent: %s\\n&#34; $botname }} {{- end }} And once the loop is finished, I print the important Disallow: / rule (and a plug for the repo) and clean up:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:10} {{- printf &#34;Disallow: /\\n&#34; }} {{- printf &#34;\\n# (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt)&#34; }} {{- end -}} {{- else -}} {{- errorf &#34;Unable to get remote resource %q&#34; $url -}} {{- end -}} So here's the completed layouts/partials/bad-robots.html:
# torchlight! {&#34;lineNumbers&#34;:true} {{- $url := &#34;https://raw.githubusercontent.com/ai-robots-txt/ai.robots.txt/main/robots.json&#34; -}} {{- with resources.GetRemote $url -}} {{- with .Err -}} {{- errorf &#34;%s&#34; . -}} {{- else -}} {{- $robots := unmarshal .Content -}} {{- range $botname, $_ := $robots }} {{- printf &#34;User-agent: %s\\n&#34; $botname }} {{- end }} {{- printf &#34;Disallow: /\\n&#34; }} {{- printf &#34;\\n# (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt)&#34; }} {{- end -}} {{- else -}} {{- errorf &#34;Unable to get remote resource %q&#34; $url -}} {{- end -}} After that's in place, I can fire off a quick hugo server in the shell and check out my work at http://localhost:1313/robots.txt:
Sitemap: http://localhost:1313/sitemap.xml # hello robots [^_^] # let&#39;s be friends &lt;3 User-agent: * Disallow: # except for these bots which are not friends: User-agent: Amazonbot User-agent: Applebot-Extended User-agent: Bytespider User-agent: CCBot User-agent: ChatGPT-User User-agent: Claude-Web User-agent: ClaudeBot User-agent: Diffbot User-agent: FacebookBot User-agent: FriendlyCrawler User-agent: GPTBot User-agent: Google-Extended User-agent: GoogleOther User-agent: GoogleOther-Image User-agent: GoogleOther-Video User-agent: ICC-Crawler User-agent: ImageSift User-agent: Meta-ExternalAgent User-agent: OAI-SearchBot User-agent: PerplexityBot User-agent: PetalBot User-agent: Scrapy User-agent: Timpibot User-agent: VelenPublicWebCrawler User-agent: YouBot User-agent: anthropic-ai User-agent: cohere-ai User-agent: facebookexternalhit User-agent: img2dataset User-agent: omgili User-agent: omgilibot Disallow: / # (bad bots bundled by https://github.com/ai-robots-txt/ai.robots.txt) Neat!
Next Steps Of course, bad agents being disallowed in a robots.txt doesn't really accomplish anything if they're just going to ignore that and scrape my content anyway↗. I closed my last post on the subject with a bit about the Cloudflare WAF rule I had created to actively block these known bad actors. Since then, two things have changed:
First, Cloudflare rolled out an even easier way to block bad bots with a single click↗. If you're using Cloudflare, just enable that and call it day.
Second, this site is now hosted (and fronted) by Bunny so the Cloudflare solutions won't help me anymore.
Instead, I've been using Melanie's handy script↗ to create a Bunny edge rule (similar to the Cloudflare WAF rule) to handle the blocking.
Going forward, I think I'd like to explore using Bunny's new Terraform provider↗ to manage the edge rule↗ in a more stateful way. But that's a topic for another post!
`,description:"Using Hugo resources.GetRemote to fetch a list of bad bots and generate a valid robots.txt at build time.",url:"https://runtimeterror.dev/dynamic-robots-txt-hugo-external-data-sources/"},"https://runtimeterror.dev/taking-taildrive-testdrive/":{title:"Taking Taildrive for a Testdrive",tags:["linux","tailscale"],content:`My little homelab is bit different from many others in that I don't have a SAN/NAS or other dedicated storage setup. This can sometimes make sharing files between systems a little bit tricky. I've used workarounds like Tailscale Serve for sharing files over HTTP or simply scping files around as needed, but none of those solutions are really very elegant.
Last week, Tailscale announced a new integration↗ with ControlD↗ to add advanced DNS filtering and security. While I was getting that set up on my tailnet, I stumbled across an option I hadn't previously noticed in the Tailscale CLI: the tailscale drive command:
Share a directory with your tailnet
USAGE
  tailscale drive share <name> <path>
  tailscale drive rename <oldname> <newname>
  tailscale drive unshare <name>
  tailscale drive list
Taildrive allows you to share directories with other machines on your tailnet.
That sounded kind of neat - especially once I found the corresponding Taildrive documentation↗ and started to get a better understanding of how this new(ish) feature works:
Normally, maintaining a file server requires you to manage credentials and access rules separately from the connectivity layer. Taildrive offers a file server that unifies connectivity and access controls, allowing you to share directories directly from the Tailscale client. You can then use your tailnet policy file to define which members of your tailnet can access a particular shared directory, and even define specific read and write permissions.
Beginning in version 1.64.0, the Tailscale client includes a WebDAV server that runs on 100.100.100.100:8080 while Tailscale is connected. Every directory that you share receives a globally-unique path consisting of the tailnet, the machine name, and the share name: /tailnet/machine/share.
For example, if you shared a directory with the share name docs from the machine mylaptop on the tailnet mydomain.com, the share's path would be /mydomain.com/mylaptop/docs.
Oh yeah. That will be a huge simplification for how I share files within my tailnet.
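Since it's just WebDAV under the hood, that path can be poked at with any WebDAV-capable client from a machine on the tailnet. A rough sketch with curl, assuming the example names from the docs above (mydomain.com, mylaptop, docs):

```shell
# list the share's contents with a WebDAV PROPFIND request
# against the local Taildrive endpoint
curl -s -X PROPFIND -H "Depth: 1" \
  http://100.100.100.100:8080/mydomain.com/mylaptop/docs/
```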
I've now had a chance to get this implemented on my tailnet and thought I'd share some notes on how I did it.
ACL Changes My Tailscale policy relies heavily on ACL tags↗ to manage access between systems, especially for "headless" server systems which don't typically have users logged in to them. I don't necessarily want every system to be able to export a file share so I decided to control that capability with a new tag:share tag. Before I could use that tag, though, I had to add it to the ACL↗:
{
  "groups": {
    "group:admins": ["[email protected]"],
  },
  "tagOwners": {
    "tag:share": ["group:admins"],
  },
  {...},
}
Next I needed to add the appropriate node attributes↗ to enable Taildrive sharing on devices with that tag and Taildrive access for all other systems:
{
  "nodeAttrs": [
    {
      // devices with the share tag can share files with Taildrive
      "target": ["tag:share"],
      "attr": ["drive:share"],
    },
    {
      // all devices can access shares
      "target": ["*"],
      "attr": ["drive:access"],
    },
  ],
  {...},
}
And I created a pair of Grants↗ to give logged-in users read-write access and tagged devices read-only access:
{
  "grants": [
    {
      // users get read-write access to shares
      "src": ["autogroup:member"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{
          "shares": ["*"],
          "access": "rw"
        }]
      }
    },
    {
      // tagged devices get read-only access
      "src": ["autogroup:tagged"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{
          "shares": ["*"],
          "access": "ro"
        }]
      }
    }
  ],
  {...},
}
That will let me create/manage files from the devices I regularly work on, and easily retrieve them as needed on the others.
Then I just used the Tailscale admin portal to add the new tag:share tag to my existing files node:
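(As an aside, the tag could presumably also be advertised from the node itself rather than through the portal; a quick sketch, with the caveat that re-running tailscale up may require repeating any other flags already in use on that node:)

```shell
# advertise the new tag from the files node
sudo tailscale up --advertise-tags=tag:share
```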
Exporting the Share After making the required ACL changes, actually publishing the share was very straightforward. Per the tailscale drive --help output↗, the syntax is:
tailscale drive share <name> <path> # [tl! .cmd]
I (somewhat-confusingly) wanted to share a share named share, found at /home/john/share (I might be bad at naming things) so I used this to export it:
tailscale drive share share /home/john/share # [tl! .cmd] And I could verify that share had, in fact, been shared with:
tailscale drive list # [tl! .cmd]
name   path              as   # [tl! .nocopy:2]
-----  ----------------  ----
share  /home/john/share  john
Mounting the Share In order to mount the share from the Debian Linux development environment on my Chromebook↗, I first needed to install the davfs2 package to add support for mounting WebDAV shares:
sudo apt update # [tl! .cmd:1]
sudo apt install davfs2
I need to be able to mount the share as my standard user account (without elevation) to ensure that the ownership and permissions are correctly inherited. The davfs2 installer offered to enable the SUID bit to support this, but that change on its own doesn't seem to have been sufficient in my testing. In addition (or perhaps instead?), I had to add my account to the davfs2 group:
sudo usermod -aG davfs2 $USER # [tl! .cmd] And then use the newgrp command to load the new membership without having to log out and back in again:
newgrp davfs2 # [tl! .cmd] Next I created a folder inside my home directory to use as a mountpoint:
mkdir ~/taildrive # [tl! .cmd] I knew from the Taildrive docs↗ that the WebDAV server would be running at http://100.100.100.100:8080 and the share would be available at /&lt;tailnet&gt;/&lt;machine&gt;/&lt;share&gt;, so I added the following to my /etc/fstab:
http://100.100.100.100:8080/example.com/files/share /home/john/taildrive/ davfs user,rw,noauto 0 0
Then I ran sudo systemctl daemon-reload to make sure the system knew about the changes to the fstab.
Taildrive's WebDAV implementation doesn't require any additional authentication (that's handled automatically by Tailscale), but davfs2 doesn't know that. So to keep it from prompting unnecessarily for credentials when attempting to mount the taildrive, I added this to the bottom of ~/.davfs2/secrets, with empty strings taking the place of the username and password:
/home/john/taildrive "" ""
After that, I could mount the share like so:
mount ~/taildrive # [tl! .cmd] And verify that I could see the files being shared from share on files:
ls -l ~/taildrive # [tl! .cmd]
drwxr-xr-x  - john 15 Feb 09:20  books # [tl! .nocopy:5]
drwxr-xr-x  - john 22 Oct  2023  dist
drwx------  - john 28 Jul 15:10  lost+found
drwxr-xr-x  - john 22 Nov  2023  media
drwxr-xr-x  - john 16 Feb  2023  notes
.rw-r--r-- 18 john 10 Jan  2023  status
Neat, right?
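A couple of quick checks can also confirm that the mountpoint is really being served by davfs2 rather than a stale local directory (nothing Taildrive-specific here, just standard mount tooling):

```shell
# show the filesystem backing the mountpoint
findmnt --target ~/taildrive

# and the free space the share reports
df -h ~/taildrive
```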
I'd like to eventually get this set up so that AutoFS↗ can handle mounting the Taildrive WebDAV share on the fly. I know that won't work within the containerized Linux environment on my Chromebook↗ but I think it should be possible on an actual Linux system. My initial efforts were unsuccessful though; I'll update this post if I figure it out.
In the meantime, though, this will be a more convenient way for me to share files between my Tailscale-connected systems.
`,description:"A quick exploration of Taildrive, Tailscale's new(ish) feature to easily share directories with other machines on your tailnet without having to juggle authentication or network connectivity.",url:"https://runtimeterror.dev/taking-taildrive-testdrive/"},"https://runtimeterror.dev/automate-packer-builds-github-actions/":{title:"Automate Packer Builds with GithHub Actions",tags:["api","automation","cicd","containers","docker","iac","linux","packer","proxmox","selfhosting","shell","tailscale"],content:`I recently shared how I set up Packer to build Proxmox templates in my homelab. That post covered storing (and retrieving) environment-specific values in Vault, the cloud-init configuration for defining the installation parameters, the various post-install scripts for further customizing and hardening the template, and the Packer template files that tie it all together. By the end of the post, I was able to simply run ./build.sh ubuntu2204 to kick the build of a new Ubuntu 22.04 template without having to do any other interaction with the process.
That's pretty cool, but The Dream is to not have to do anything at all. So that's what this post is about: setting up a self-hosted GitHub Actions Runner to perform the build and a GitHub Actions workflow to trigger it.
Self-Hosted Runner When a GitHub Actions workflow fires, it schedules the job(s) to run on GitHub's own infrastructure. That's easy and convenient, but can make things tricky when you need a workflow to interact with on-prem infrastructure. I've worked around that in the past by configuring the runner to connect to my tailnet, but given the amount of data that will need to be transferred during the Packer build I decided that a self-hosted runner↗ would be a better solution.
I wanted my runner to execute the build inside of a Docker container for better control of the environment, and I wanted that container to run without elevated permissions (rootless)↗.
Self-Hosted Runner Security
GitHub strongly recommends↗ that you only use self-hosted runners with private repositories. You don't want a misconfigured workflow to allow a pull request submitted from a fork to run potentially-malicious code on your system(s).
So while I have a public repo↗ to share my Packer work, my runner environment is attached to an otherwise-identical private repo. I'd recommend following a similar setup.
Setup Rootless Docker Host I start by cloning a fresh Ubuntu 22.04 VM off of my new template. After doing the basic initial setup (setting the hostname and IP, connecting it to Tailscale, and so on), I create a user account for the runner to use. That account will need sudo privileges during the initial setup, but those will be revoked later on. I also set a password for the account.
sudo useradd -m -G sudo -s $(which bash) github # [tl! .cmd:1] sudo passwd github I then install the systemd-container package so that I can use machinectl↗ to log in as the new user (since sudo su won't work for the rootless setup↗).
sudo apt update # [tl! .cmd:2] sudo apt install systemd-container sudo machinectl shell github@ And I install the uidmap package since rootless Docker requires newuidmap and newgidmap:
sudo apt install uidmap # [tl! .cmd] At this point, I can follow the usual Docker installation instructions↗:
# Add Docker&#39;s official GPG key: sudo apt-get update # [tl! .cmd:4] sudo apt-get install ca-certificates curl sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to apt sources: echo \\ # [tl! .cmd] &#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \\ $(. /etc/os-release &amp;&amp; echo &#34;$VERSION_CODENAME&#34;) stable&#34; | \\ sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null sudo apt-get update # [tl! .cmd] # Install the Docker packages: sudo apt-get install \\ # [tl! .cmd] docker-ce \\ docker-ce-cli \\ containerd.io \\ docker-buildx-plugin \\ docker-compose-plugin Now it's time for the rootless setup, which starts by disabling the existing Docker service and socket and then running the dockerd-rootless-setuptool.sh script:
sudo systemctl disable --now docker.service docker.socket # [tl! .cmd:1] sudo rm /var/run/docker.sock dockerd-rootless-setuptool.sh install # [tl! .cmd] Next, I enable and start the service in the user context, and I enable &quot;linger&quot; for the github user so that its systemd instance can continue to function even while the user is not logged in:
systemctl --user enable --now docker # [tl! .cmd:1] sudo loginctl enable-linger $(whoami) That should take care of setting up Docker, and I can quickly confirm by spawning the usual hello-world container:
docker run hello-world # [tl! .cmd] Unable to find image &#39;hello-world:latest&#39; locally # [tl! .nocopy:25] latest: Pulling from library/hello-world c1ec31eb5944: Pull complete Digest: sha256:1408fec50309afee38f3535383f5b09419e6dc0925bc69891e79d84cc4cdcec6 Status: Downloaded newer image for hello-world:latest Hello from Docker! This message shows that your installation appears to be working correctly. To generate this message, Docker took the following steps: 1. The Docker client contacted the Docker daemon. 2. The Docker daemon pulled the &#34;hello-world&#34; image from the Docker Hub. (amd64) 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading. 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal. To try something more ambitious, you can run an Ubuntu container with: $ docker run -it ubuntu bash Share images, automate workflows, and more with a free Docker ID: https://hub.docker.com/ For more examples and ideas, visit: https://docs.docker.com/get-started/ So the Docker piece is sorted; now for setting up the runner.
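One quick aside before moving on to the runner: the rootless daemon listens on a per-user socket rather than /var/run/docker.sock, and the GitHub Actions workflow will need that path later for DOCKER_HOST. A small sketch to confirm where it lives (run as the github user):

```shell
# rootless Docker's socket lives under the user's runtime directory
ls -l /run/user/$(id -u)/docker.sock

# pointing the client at it explicitly should still work
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info --format '{{.Name}}'
```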
Install/Configure Runner I know I've been talking about a singular runner, but I'm actually setting up multiple instances of the runner on the same host to allow running jobs in parallel. I could probably support four simultaneous builds in my homelab but I'll start with just two runners for now (after all, I only have two build flavors so far anyway).
Each runner instance needs its own directory so I create those under /opt/github/:
sudo mkdir -p /opt/github/runner{1..2} # [tl! .cmd:2] sudo chown -R github:github /opt/github cd /opt/github And then I download the latest runner package↗:
curl -O -L https://github.com/actions/runner/releases/download/v2.317.0/actions-runner-linux-x64-2.317.0.tar.gz # [tl! .cmd] For each runner, I:
Extract the runner software into the designated directory and cd into it: tar xzf ./actions-runner-linux-x64-2.317.0.tar.gz --directory=runner1 # [tl! .cmd:1] cd runner1 Go to my private GitHub repo, navigate to Settings &gt; Actions &gt; Runners, and click the big friendly New self-hosted runner button at the top-right of the page. All I really need from that is the token which appears in the Configure section. Once I have that token, I... Run the configuration script, accepting the defaults for every prompt except for the runner name, which must be unique within the repository (so runner1, runner2, so on): ./config.sh \\ # [tl! **:2 .cmd] --url https://github.com/[GITHUB_USERNAME]/[GITHUB_REPO] \\ --token [TOKEN] # [tl! .nocopy:1,35] -------------------------------------------------------------------------------- | ____ _ _ _ _ _ _ _ _ | | / ___(_) |_| | | |_ _| |__ / \\ ___| |_(_) ___ _ __ ___ | | | | _| | __| |_| | | | | &#39;_ \\ / _ \\ / __| __| |/ _ \\| &#39;_ \\/ __| | | | |_| | | |_| _ | |_| | |_) | / ___ \\ (__| |_| | (_) | | | \\__ \\ | | \\____|_|\\__|_| |_|\\__,_|_.__/ /_/ \\_\\___|\\__|_|\\___/|_| |_|___/ | | | | Self-hosted runner registration | | | -------------------------------------------------------------------------------- # Authentication √ Connected to GitHub # Runner Registration Enter the name of the runner group to add this runner to: [press Enter for Default] Enter the name of runner: [press Enter for runner] runner1 # [tl! ** ~~] This runner will have the following labels: &#39;self-hosted&#39;, &#39;Linux&#39;, &#39;X64&#39; Enter any additional labels (ex. label-1,label-2): [press Enter to skip] √ Runner successfully added √ Runner connection is good # Runner settings Enter name of work folder: [press Enter for _work] √ Settings Saved. Use the svc.sh script to install it as a user service, and start it running as the github user: sudo ./svc.sh install $(whoami) # [tl! .cmd:1] sudo ./svc.sh start $(whoami) Once all of the runner instances are configured I can remove the github user from the sudo group:
sudo deluser github sudo # [tl! .cmd]
And I can see that my new runners are successfully connected to my private GitHub repo: I now have a place to execute the Packer builds, I just need to tell the runner how to do that. And that means it's time to talk about the...
GitHub Actions Workflow My solution for this consists of a Github Actions workflow which calls a custom action to spawn a Docker container and do the work. Let's start with the innermost component (the Docker image) and work out from there.
Docker Image I'm using a customized Docker image consisting of Packer and associated tools with the addition of the wrapper script that I used for local builds. That image will be integrated with a custom action called packerbuild.
So I'll create a folder to hold my new action (and Dockerfile):
mkdir -p .github/actions/packerbuild # [tl! .cmd] I don't want to maintain two copies of the build.sh script, so I move it into this new folder and create a symlink to it back at the top of the repo:
mv build.sh .github/actions/packerbuild/ # [tl! .cmd:1] ln -s .github/actions/packerbuild/build.sh build.sh That way I can easily load the script into the Docker image while also having it available for running on-demand local builds as needed.
And as a quick reminder, that build.sh script accepts a single argument to specify what build to produce and then fires off the appropriate Packer commands:
# torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # Run a single packer build # # Specify the build as an argument to the script. Ex: # ./build.sh ubuntu2204 set -eu if [ $# -ne 1 ]; then echo &#34;&#34;&#34; Syntax: $0 [BUILD] Where [BUILD] is one of the supported OS builds: ubuntu2204 ubuntu2404 &#34;&#34;&#34; exit 1 fi if [ ! &#34;\${VAULT_TOKEN+x}&#34; ]; then #shellcheck disable=SC1091 source vault-env.sh || ( echo &#34;No Vault config found&#34;; exit 1 ) fi build_name=&#34;\${1,,}&#34; build_path= case $build_name in ubuntu2204) build_path=&#34;builds/linux/ubuntu/22-04-lts/&#34; ;; ubuntu2404) build_path=&#34;builds/linux/ubuntu/24-04-lts/&#34; ;; *) echo &#34;Unknown build; exiting...&#34; exit 1 ;; esac packer init &#34;\${build_path}&#34; packer build -on-error=cleanup -force &#34;\${build_path}&#34; I use the following Dockerfile to create the environment in which the build will be executed:
# torchlight! {&#34;lineNumbers&#34;:true} FROM alpine:3.20 ENV PACKER_VERSION=1.10.3 RUN apk --no-cache upgrade \\ &amp;&amp; apk add --no-cache \\ bash \\ curl \\ git \\ openssl \\ wget \\ xorriso ADD https://releases.hashicorp.com/packer/\${PACKER_VERSION}/packer_\${PACKER_VERSION}_linux_amd64.zip ./ ADD https://releases.hashicorp.com/packer/\${PACKER_VERSION}/packer_\${PACKER_VERSION}_SHA256SUMS ./ RUN sed -i &#39;/.*linux_amd64.zip/!d&#39; packer_\${PACKER_VERSION}_SHA256SUMS \\ &amp;&amp; sha256sum -c packer_\${PACKER_VERSION}_SHA256SUMS \\ &amp;&amp; unzip packer_\${PACKER_VERSION}_linux_amd64.zip -d /bin \\ &amp;&amp; rm -f packer_\${PACKER_VERSION}_linux_amd64.zip packer_\${PACKER_VERSION}_SHA256SUMS COPY build.sh /bin/build.sh RUN chmod +x /bin/build.sh ENTRYPOINT [&#34;/bin/build.sh&#34;] It starts with a minimal alpine base image and installs a few common packages (and xorriso to support the creation of ISO images). It then downloads the indicated version of the Packer installer and extracts it to /bin/. Finally it copies the build.sh script into the image and sets it as the ENTRYPOINT.
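Before wiring the image into an action, it can be sanity-checked locally. This is just a sketch under a few assumptions: the commands run from the repo root, VAULT_ADDR and VAULT_TOKEN are already exported, the container can actually reach Vault and Proxmox, and the packerbuild-test tag is arbitrary:

```shell
# build the action's image straight from its Dockerfile
docker build -t packerbuild-test .github/actions/packerbuild

# run a single build, mounting the repo so the builds/ paths resolve,
# and passing the flavor as the sole argument (the ENTRYPOINT is build.sh)
docker run --rm \
  -e VAULT_ADDR -e VAULT_TOKEN \
  -v "$(pwd)":/work -w /work \
  packerbuild-test ubuntu2204
```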
Custom Action Turning this Docker image into an action requires just a smidge of YAML to describe how to interact with the image.
Behold, .github/actions/packerbuild/action.yml:
# torchlight! {&#34;lineNumbers&#34;:true} name: &#39;Execute Packer Build&#39; description: &#39;Performs a Packer build&#39; inputs: build-flavor: description: &#39;The build to execute&#39; required: true runs: using: &#39;docker&#39; image: &#39;Dockerfile&#39; args: - \${{ inputs.build-flavor }} As you can see, the action expects (nay, requires!) a build-flavor input to line up with build.sh's expected parameter. The action will run in Docker using the image defined in the local Dockerfile, and will pass \${{ inputs.build-flavor }} as the sole argument to that image.
Alright, let's tie it all together with the automation workflow now.
The Workflow The workflow is defined in .github/workflows/build.yml. It starts simply enough with a name and an explanation of when the workflow should be executed.
# torchlight! {&#34;lineNumbers&#34;:true} name: Build VM Templates on: workflow_dispatch: schedule: - cron: &#39;0 8 * * 1&#39; workflow_dispatch sets it so I can manually execute the workflow from the GitHub Actions UI (for testing / as a treat), and the cron schedule configures the workflow to run automatically every Monday at 8:00 AM (UTC).
Rather than rely on an environment file (ew), I'm securely storing the VAULT_ADDR and VAULT_TOKEN values in GitHub repository secrets↗. So I introduce those values into the workflow like so:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:8} env: VAULT_ADDR: \${{ secrets.VAULT_ADDR }} VAULT_TOKEN: \${{ secrets.VAULT_TOKEN }} When I did the Vault setup, I created the token with a period of 336 hours; that means that the token will only remain valid as long as it gets renewed at least once every two weeks. So I start the jobs: block with a simple call to Vault's REST API↗ to renew the token before each run:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:12} jobs: prepare: name: Prepare runs-on: self-hosted steps: - name: Renew Vault Token run: | curl -s --header &#34;X-Vault-Token:\${VAULT_TOKEN}&#34; \\ --request POST &#34;\${VAULT_ADDR}v1/auth/token/renew-self&#34; | grep -q auth Assuming that token is renewed successfully, the Build job uses a matrix strategy↗ to enumerate the build-flavors that will need to be built. All of the following steps will be repeated for each flavor.
And the first step is to simply check out the GitHub repo so that the runner has all the latest code.
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:22} builds: name: Build needs: prepare runs-on: self-hosted strategy: matrix: build-flavor: - ubuntu2204 - ubuntu2404 steps: - name: Checkout uses: actions/checkout@v4 To get the runner to interact with the rootless Docker setup we'll need to export the DOCKER_HOST variable and point it to the Docker socket registered by the user... which first means obtaining the UID of that user and echoing it to the special $GITHUB_OUTPUT variable so it can be passed to the next step:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:34} - name: Get UID of Github user id: runner_uid run: | echo &#34;gh_uid=$(id -u)&#34; &gt;&gt; &#34;$GITHUB_OUTPUT&#34; And now, finally, for the actual build. The Build template step calls the .github/actions/packerbuild custom action, sets the DOCKER_HOST value to the location of docker.sock (using the UID obtained earlier) so the runner will know how to interact with rootless Docker, and passes along the build-flavor from the matrix to influence which template will be created.
If it fails for some reason, the Retry on failure step will try again, just in case it was a transient glitch like a network error or a hung process.
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:38} - name: Build template id: build uses: ./.github/actions/packerbuild timeout-minutes: 90 env: DOCKER_HOST: unix:///run/user/\${{ steps.runner_uid.outputs.gh_uid }}/docker.sock with: build-flavor: \${{ matrix.build-flavor }} continue-on-error: true - name: Retry on failure id: retry if: steps.build.outcome == &#39;failure&#39; uses: ./.github/actions/packerbuild timeout-minutes: 90 env: DOCKER_HOST: unix:///run/user/\${{ steps.runner_uid.outputs.gh_uid }}/docker.sock with: build-flavor: \${{ matrix.build-flavor }} Here's the complete .github/workflows/build.yml, all in one code block:
# torchlight! {&#34;lineNumbers&#34;:true} name: Build VM Templates on: workflow_dispatch: schedule: - cron: &#39;0 8 * * 1&#39; env: VAULT_ADDR: \${{ secrets.VAULT_ADDR }} VAULT_TOKEN: \${{ secrets.VAULT_TOKEN }} jobs: prepare: name: Prepare runs-on: self-hosted steps: - name: Renew Vault Token run: | curl -s --header &#34;X-Vault-Token:\${VAULT_TOKEN}&#34; \\ --request POST &#34;\${VAULT_ADDR}v1/auth/token/renew-self&#34; | grep -q auth builds: name: Build needs: prepare runs-on: self-hosted strategy: matrix: build-flavor: - ubuntu2204 - ubuntu2404 steps: - name: Checkout uses: actions/checkout@v4 - name: Get UID of Github user id: runner_uid run: | echo &#34;gh_uid=$(id -u)&#34; &gt;&gt; &#34;$GITHUB_OUTPUT&#34; - name: Build template id: build uses: ./.github/actions/packerbuild timeout-minutes: 90 env: DOCKER_HOST: unix:///run/user/\${{ steps.runner_uid.outputs.gh_uid }}/docker.sock with: build-flavor: \${{ matrix.build-flavor }} continue-on-error: true - name: Retry on failure id: retry if: steps.build.outcome == &#39;failure&#39; uses: ./.github/actions/packerbuild timeout-minutes: 90 env: DOCKER_HOST: unix:///run/user/\${{ steps.runner_uid.outputs.gh_uid }}/docker.sock with: build-flavor: \${{ matrix.build-flavor }} Your Templates Are Served All that's left at this point is to git commit and git push this to my private repo. I can then visit the repo on the web, go to the Actions tab, select the new Build VM Templates workflow on the left, and click the Run workflow button. That fires off the build, and I can check back a few minutes later to confirm that it completed successfully:
And I can also consult with my Proxmox host and confirm that the new VM templates were indeed created:
For future builds, I don't have to actually do anything at all. GitHub will automatically trigger this workflow every Monday morning so my templates will never be more than a week out-of-date. Pretty slick, right?
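The manual trigger (and a live view of the run) is also available from the gh CLI, which can be handy when a browser isn't; a small sketch, assuming gh is authenticated against the private repo:

```shell
# kick off the workflow by name
gh workflow run 'Build VM Templates'

# then pick the new run and watch it until it finishes
gh run watch
```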
You can check out my public repo at github.com/jbowdre/packer-proxmox-templates/↗ to explore the full setup - and to follow along as I add support for additional OS flavors.
`,description:"Using a GitHub Actions workflow, self-hosted runners, rootless Docker, Packer, and Vault to automatically build VM templates on Proxmox.",url:"https://runtimeterror.dev/automate-packer-builds-github-actions/"},"https://runtimeterror.dev/building-proxmox-templates-packer/":{title:"Building Proxmox Templates with Packer",tags:["homelab","iac","linux","packer","proxmox","tailscale","vault"],content:`I've been using Proxmox in my homelab for a while now, and I recently expanded the environment with two HP Elite Mini 800 G9 computers. It was time to start automating the process of building and maintaining my VM templates. I already had functional Packer templates for VMware↗ so I used that as a starting point for the Proxmox builds↗. So far, I've only ported over the Ubuntu builds; I'm telling myself I'll get the rest moved over after finally publishing this post.
Once I got the builds working locally, I explored how to automate them. I set up a GitHub Actions workflow and a rootless runner to perform the builds for me. I wrote up some notes on that part of the process here, but first, let's run through how I set up Packer. That will be plenty to chew on for now.
This post will cover a lot of the Packer implementation details but may gloss over some general setup steps; you'll need at least a passing familiarity with Packer↗ and Vault↗ to take this on.
Component Overview There are several important parts to this setup, so let's start by quickly running through those:
a Proxmox host to serve the virtual infrastructure and provide compute for the new templates, a Vault instance running in a container in the lab to hold the secrets needed for the builds, and some Packer content for actually building the templates. Proxmox Setup The only configuration I did on the Proxmox side was to create a user account↗ that Packer could use. I called it packer but didn't set a password for it. Instead, I set up an API token↗ for that account, making sure to uncheck the "Privilege Separation" box so that the token would inherit the same permissions as the user itself.
To use the token, I needed the ID (in the form USERNAME@REALM!TOKENNAME) and the UUID-looking secret, which is only displayed once, so I made sure to record it in a safe place.
Speaking of privileges, the Proxmox ISO integration documentation↗ doesn't offer any details on the minimum required permissions, and none of my attempts worked until I eventually assigned the Administrator role to the packer user. I plan on doing more testing to narrow the scope before running this in production, but this will do for my homelab purposes.
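For reference, the same account, token, and role assignment can also be handled from the Proxmox shell with pveum instead of the web UI. This is only a sketch from my notes and the exact flags may differ between Proxmox versions, so treat it as a starting point rather than gospel:

```shell
# create the packer user (no password; it authenticates with a token)
pveum user add packer@pve

# create an API token for it with privilege separation disabled
pveum user token add packer@pve packer --privsep 0

# grant the (admittedly broad) Administrator role at the root of the ACL tree
pveum acl modify / --users packer@pve --roles Administrator
```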
Otherwise, I just needed to figure out the details like which network bridge, ISO storage, and VM storage the Packer-built VMs should use.
Vault Configuration I use Vault↗ to hold the configuration details for the template builds - not just traditional secrets like usernames and passwords, but basically every environment-specific setting as well. This approach lets others use my Packer code without having to change much (if any) of it; every value that I expect to change between environments is retrieved from Vault at runtime.
Because this is just a homelab, I'm using Vault in Docker↗, and I'm making it available within my tailnet with Tailscale Serve using the following docker-compose.yaml
# torchlight! {&#34;lineNumbers&#34;:true} services: tailscale: image: tailscale/tailscale:latest container_name: vault-tailscaled restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: vault TS_STATE_DIR: &#34;/var/lib/tailscale/&#34; TS_SERVE_CONFIG: /config/serve-config.json volumes: - ./ts_data:/var/lib/tailscale/ - ./serve-config.json:/config/serve-config.json vault: image: hashicorp/vault container_name: vault restart: unless-stopped environment: VAULT_ADDR: &#39;https://0.0.0.0:8200&#39; cap_add: - IPC_LOCK volumes: - ./data:/vault/data - ./config:/vault/config - ./log:/vault/log command: vault server -config=/vault/config/vault.hcl network_mode: &#34;service:tailscale&#34; I use the following ./config/vault.hcl to set the Vault server configuration:
ui = true

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}

storage "file" {
  path = "/vault/data"
}
And this ./serve-config.json to tell Tailscale that it should proxy the Vault container's port 8200 and make it available on my tailnet at https://vault.tailnet-name.ts.net/:
# torchlight! {&#34;lineNumbers&#34;:true} { &#34;TCP&#34;: { &#34;443&#34;: { &#34;HTTPS&#34;: true } }, &#34;Web&#34;: { &#34;vault.tailnet-name.ts.net:443&#34;: { &#34;Handlers&#34;: { &#34;/&#34;: { &#34;Proxy&#34;: &#34;http://127.0.0.1:8200&#34; } } } } } After performing the initial Vault setup, I then created a kv-v2↗ secrets engine for Packer to use:
vault secrets enable -path=packer kv-v2 # [tl! .cmd] Success! Enabled the kv-v2 secrets engine at: packer/ # [tl! .nocopy] I defined a policy↗ which will grant the bearer read-only access to the data stored in the packer secrets as well as the ability to create and update its own token:
cat << EOF | vault policy write packer -
path "packer/*" {
  capabilities = ["read", "list"]
}
path "auth/token/renew-self" {
  capabilities = ["update"]
}
path "auth/token/create" {
  capabilities = ["create", "update"]
}
EOF # [tl! .cmd:-12,1]
Success! Uploaded policy: packer # [tl! .nocopy]
Now I just need to create a token attached to the policy:
vault token create -policy=packer -no-default-policy \
  -orphan -ttl=4h -period=336h -display-name=packer # [tl! .cmd:-1,1 ]
Key                 Value # [tl! .nocopy:8]
---                 -----
token               hvs.CAES[...]GSFQ
token_accessor      aleV[...]xu5I
token_duration      336h
token_renewable     true
token_policies      ["packer"]
identity_policies   []
policies            ["packer"]
The token will only be displayed this once so I make sure to copy it somewhere safe.
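With the engine, policy, and token squared away, the actual values get written into the two secrets described below. A minimal sketch of how that might look from an admin session (the keys mirror the tables that follow; every value shown here is a placeholder):

```shell
# environment-specific Proxmox settings
# (plus the remaining keys from the table below)
vault kv put packer/proxmox \
  api_url='https://prox.tailnet-name.ts.net/api2/json/' \
  insecure_connection='true' \
  node='proxmox1' \
  network_bridge='vmbr0' \
  token_id='packer@pve!packer' \
  token_secret='REPLACE_ME'

# values baked into the built templates
vault kv put packer/linux \
  username='admin' \
  public_key='ssh-ed25519 AAAA[...] admin@example'
```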
Within the packer secrets engine, I have two secrets which each have a number of subkeys.
proxmox contains values related to the Proxmox environment:
Key                   Example value                                  Description
api_url               https://prox.tailnet-name.ts.net/api2/json/   URL to the Proxmox API
insecure_connection   true                                           set to false if your Proxmox host has a valid certificate
iso_path              local:iso                                      path for (existing) ISO storage
iso_storage_pool      local                                          pool for storing created/uploaded ISOs
network_bridge        vmbr0                                          bridge the VM's NIC will be attached to
node                  proxmox1                                       node name where the VM will be built
token_id              packer@pve!packer                              ID for an API token↗, in the form USERNAME@REALM!TOKENNAME
token_secret          3fc69f[...]d2077eda                            secret key for the token
vm_storage_pool       zfs-pool                                       storage pool where the VM will be created
linux holds values for the created VM template(s)
Key                   Example value                                              Description
bootloader_password   bootplease                                                 Grub bootloader password to set
password_hash         $6$rounds=4096$NltiNLKi[...]a7Shax41                       hash of the build account's password (example generated with mkpasswd -m sha512crypt -R 4096)
public_key            ssh-ed25519 AAAAC3NzaC1[...]lXLUI5I40 [email protected]   SSH public key for the user
username              admin                                                      build account username
Packer Content The layout of my Packer Proxmox repo↗ looks something like this:
. ├── builds │ └── linux │ └── ubuntu │ ├── 22-04-lts │ │ ├── data │ │ │ ├── meta-data │ │ │ └── user-data.pkrtpl.hcl │ │ ├── hardening.sh │ │ ├── linux-server.auto.pkrvars.hcl │ │ ├── linux-server.pkr.hcl │ │ └── variables.pkr.hcl │ └── 24-04-lts [tl! collapse:7 ] │ ├── data │ │ ├── meta-data │ │ └── user-data.pkrtpl.hcl │ ├── hardening.sh │ ├── linux-server.auto.pkrvars.hcl │ ├── linux-server.pkr.hcl │ └── variables.pkr.hcl ├── certs ├── scripts │ └── linux [tl! collapse:16 ] │ ├── cleanup-cloud-init.sh │ ├── cleanup-packages.sh │ ├── cleanup-subiquity.sh │ ├── configure-pam_mkhomedir.sh │ ├── configure-sshd.sh │ ├── disable-multipathd.sh │ ├── generalize.sh │ ├── install-ca-certs.sh │ ├── install-cloud-init.sh │ ├── join-domain.sh │ ├── persist-cloud-init-net.sh │ ├── prune-motd.sh │ ├── set-homedir-privacy.sh │ ├── update-packages.sh │ ├── wait-for-cloud-init.sh │ └── zero-disk.sh ├── build.sh └── vault-env.sh .github/ holds the actions and workflows that will perform the automated builds. I'll cover this later. builds/ contains subfolders for OS types (Linux or Windows (eventually)) and then separate subfolders for each flavor. linux/ubuntu/22-04-lts/ holds everything related to the Ubuntu 22.04 build: data/meta-data is an empty placeholder, data/user-data.pkrtpl.hcl is a template file for cloud-init to perform the initial install, hardening.sh is a script to perform basic security hardening, variables.pkr.hcl describes all the variables for the build, linux-server.auto.pkrvars.hcl assigns values to each of those variables, and linux-server.pkr.hcl details the steps for actually performing the build. certs/ is empty in my case but could contain CA certificates that need to be installed in the template. scripts/linux/ contains a variety of scripts that will be executed by Packer as part of the build. build.sh is a wrapper script which helps with running the builds locally. vault-env.sh exports variables for connecting to my Vault instance for use by build.sh. Input Variable Definitions Let's take a quick look at the variable definitions in variables.pkr.hcl first. All it does is define the available variables along with their type, provide a brief description about what the variable should hold or be used for, and set sane defaults for some of them.
Input Variables and Local Variables
There are two types of variables used with Packer:
Input Variables↗ may have defined defaults, can be overridden, but cannot be changed after that initial override. They serve as build parameters, allowing aspects of the build to be altered without having to change the source code. Local Variables↗ are useful for assigning a name to an expression. These expressions are evaluated at runtime and can work with input variables, other local variables, data sources, and built-in functions. Input variables are great for those predefined values, while local variables can be really handy for stuff that needs to be more dynamic.
# torchlight! {&#34;lineNumbers&#34;:true} /* Ubuntu Server 22.04 LTS variables using the Packer Builder for Proxmox. */ // BLOCK: variable // Defines the input variables. // Virtual Machine Settings variable &#34;remove_cdrom&#34; { type = bool description = &#34;Remove the virtual CD-ROM(s).&#34; default = true } variable &#34;vm_name&#34; { type = string description = &#34;Name of the new template to create.&#34; } variable &#34;vm_cpu_cores&#34; { type = number description = &#34;The number of virtual CPUs cores per socket. (e.g. &#39;1&#39;)&#34; } variable &#34;vm_cpu_count&#34; { type = number description = &#34;The number of virtual CPUs. (e.g. &#39;2&#39;)&#34; } variable &#34;vm_cpu_type&#34; { # [tl! collapse:start] type = string description = &#34;The virtual machine CPU type. (e.g. &#39;host&#39;)&#34; } variable &#34;vm_disk_size&#34; { type = string description = &#34;The size for the virtual disk (e.g. &#39;60G&#39;)&#34; default = &#34;60G&#34; } variable &#34;vm_bios_type&#34; { type = string description = &#34;The virtual machine BIOS type (e.g. &#39;ovmf&#39; or &#39;seabios&#39;)&#34; default = &#34;ovmf&#34; } variable &#34;vm_guest_os_keyboard&#34; { type = string description = &#34;The guest operating system keyboard input.&#34; default = &#34;us&#34; } variable &#34;vm_guest_os_language&#34; { type = string description = &#34;The guest operating system lanugage.&#34; default = &#34;en_US&#34; } variable &#34;vm_guest_os_timezone&#34; { type = string description = &#34;The guest operating system timezone.&#34; default = &#34;UTC&#34; } variable &#34;vm_guest_os_type&#34; { type = string description = &#34;The guest operating system type. (e.g. &#39;l26&#39; for Linux 2.6+)&#34; } variable &#34;vm_mem_size&#34; { type = number description = &#34;The size for the virtual memory in MB. (e.g. &#39;2048&#39;)&#34; } variable &#34;vm_network_model&#34; { type = string description = &#34;The virtual network adapter type. (e.g. &#39;e1000&#39;, &#39;vmxnet3&#39;, or &#39;virtio&#39;)&#34; default = &#34;virtio&#34; } variable &#34;vm_scsi_controller&#34; { type = string description = &#34;The virtual SCSI controller type. (e.g. &#39;virtio-scsi-single&#39;)&#34; default = &#34;virtio-scsi-single&#34; } // VM Guest Partition Sizes variable &#34;vm_guest_part_audit&#34; { type = number description = &#34;Size of the /var/log/audit partition in MB.&#34; } variable &#34;vm_guest_part_boot&#34; { type = number description = &#34;Size of the /boot partition in MB.&#34; } variable &#34;vm_guest_part_efi&#34; { type = number description = &#34;Size of the /boot/efi partition in MB.&#34; } variable &#34;vm_guest_part_home&#34; { type = number description = &#34;Size of the /home partition in MB.&#34; } variable &#34;vm_guest_part_log&#34; { type = number description = &#34;Size of the /var/log partition in MB.&#34; } variable &#34;vm_guest_part_root&#34; { type = number description = &#34;Size of the /var partition in MB. 
Set to 0 to consume all remaining free space.&#34; default = 0 } variable &#34;vm_guest_part_swap&#34; { type = number description = &#34;Size of the swap partition in MB.&#34; } variable &#34;vm_guest_part_tmp&#34; { type = number description = &#34;Size of the /tmp partition in MB.&#34; } variable &#34;vm_guest_part_var&#34; { type = number description = &#34;Size of the /var partition in MB.&#34; } variable &#34;vm_guest_part_vartmp&#34; { type = number description = &#34;Size of the /var/tmp partition in MB.&#34; } // Removable Media Settings variable &#34;cd_label&#34; { type = string description = &#34;CD Label&#34; default = &#34;cidata&#34; } variable &#34;iso_checksum_type&#34; { type = string description = &#34;The checksum algorithm used by the vendor. (e.g. &#39;sha256&#39;)&#34; } variable &#34;iso_checksum_value&#34; { type = string description = &#34;The checksum value provided by the vendor.&#34; } variable &#34;iso_file&#34; { type = string description = &#34;The file name of the ISO image used by the vendor. (e.g. &#39;ubuntu-&lt;version&gt;-live-server-amd64.iso&#39;)&#34; } variable &#34;iso_url&#34; { type = string description = &#34;The URL source of the ISO image. (e.g. &#39;https://mirror.example.com/.../os.iso&#39;)&#34; } // Boot Settings variable &#34;vm_boot_command&#34; { type = list(string) description = &#34;The virtual machine boot command.&#34; default = [] } variable &#34;vm_boot_wait&#34; { type = string description = &#34;The time to wait before boot.&#34; } // Communicator Settings and Credentials variable &#34;build_remove_keys&#34; { type = bool description = &#34;If true, Packer will attempt to remove its temporary key from ~/.ssh/authorized_keys and /root/.ssh/authorized_keys&#34; default = true } variable &#34;communicator_insecure&#34; { type = bool description = &#34;If true, do not check server certificate chain and host name&#34; default = true } variable &#34;communicator_port&#34; { type = string description = &#34;The port for the communicator protocol.&#34; } variable &#34;communicator_ssl&#34; { type = bool description = &#34;If true, use SSL&#34; default = true } variable &#34;communicator_timeout&#34; { type = string description = &#34;The timeout for the communicator protocol.&#34; } // Provisioner Settings variable &#34;cloud_init_apt_packages&#34; { type = list(string) description = &#34;A list of apt packages to install during the subiquity cloud-init installer.&#34; default = [] } variable &#34;cloud_init_apt_mirror&#34; { type = string description = &#34;Sets the default apt mirror during the subiquity cloud-init installer.&#34; default = &#34;&#34; } variable &#34;post_install_scripts&#34; { type = list(string) description = &#34;A list of scripts and their relative paths to transfer and run after OS install.&#34; default = [] } variable &#34;pre_final_scripts&#34; { type = list(string) description = &#34;A list of scripts and their relative paths to transfer and run before finalization.&#34; default = [] } # [tl! collapse:end] (Collapsed because I think you get the idea, but feel free to expand to view the whole thing.)
Input Variable Assignments Now that I've told Packer about the variables I intend to use, I can then go about setting values for those variables. That's done in the linux-server.auto.pkrvars.hcl file. I've highlighted the most interesting bits:
# torchlight! {&#34;lineNumbers&#34;:true} /* # Ubuntu Server 22.04 LTS variables used by the Packer Builder for Proxmox. */ // Guest Operating System Metadata vm_guest_os_keyboard = &#34;us&#34; vm_guest_os_language = &#34;en_US&#34; vm_guest_os_timezone = &#34;America/Chicago&#34; // Virtual Machine Guest Operating System Setting vm_guest_os_type = &#34;l26&#34; //Virtual Machine Guest Partition Sizes (in MB) vm_guest_part_audit = 4096 # [tl! ~~:9] vm_guest_part_boot = 512 vm_guest_part_efi = 512 vm_guest_part_home = 8192 vm_guest_part_log = 4096 vm_guest_part_root = 0 vm_guest_part_swap = 1024 vm_guest_part_tmp = 4096 vm_guest_part_var = 8192 vm_guest_part_vartmp = 1024 // Virtual Machine Hardware Settings vm_cpu_cores = 1 # [tl! ~~:8] vm_cpu_count = 2 vm_cpu_type = &#34;host&#34; vm_disk_size = &#34;60G&#34; # vm_bios_type = &#34;ovmf&#34; vm_mem_size = 2048 # vm_name = &#34;Ubuntu2204&#34; vm_network_card = &#34;virtio&#34; vm_scsi_controller = &#34;virtio-scsi-single&#34; // Removable Media Settings iso_checksum_type = &#34;sha256&#34; # [tl! ~~:3] iso_checksum_value = &#34;45f873de9f8cb637345d6e66a583762730bbea30277ef7b32c9c3bd6700a32b2&#34; # iso_file = &#34;ubuntu-22.04.4-live-server-amd64.iso&#34; iso_url = &#34;https://releases.ubuntu.com/jammy/ubuntu-22.04.4-live-server-amd64.iso&#34; remove_cdrom = true // Boot Settings boot_key_interval = &#34;250ms&#34; vm_boot_wait = &#34;4s&#34; vm_boot_command = [ # [tl! ~~:8] &#34;&lt;esc&gt;&lt;wait&gt;c&#34;, &#34;linux /casper/vmlinuz --- autoinstall ds=\\&#34;nocloud\\&#34;&#34;, &#34;&lt;enter&gt;&lt;wait5s&gt;&#34;, &#34;initrd /casper/initrd&#34;, &#34;&lt;enter&gt;&lt;wait5s&gt;&#34;, &#34;boot&#34;, &#34;&lt;enter&gt;&#34; ] // Communicator Settings communicator_port = 22 communicator_timeout = &#34;25m&#34; // Provisioner Settings cloud_init_apt_packages = [ # [tl! ~~:7] &#34;cloud-guest-utils&#34;, &#34;net-tools&#34;, &#34;perl&#34;, &#34;qemu-guest-agent&#34;, &#34;vim&#34;, &#34;wget&#34; ] post_install_scripts = [ # [tl! ~~:9] &#34;scripts/linux/wait-for-cloud-init.sh&#34;, &#34;scripts/linux/cleanup-subiquity.sh&#34;, &#34;scripts/linux/install-ca-certs.sh&#34;, &#34;scripts/linux/disable-multipathd.sh&#34;, &#34;scripts/linux/prune-motd.sh&#34;, &#34;scripts/linux/persist-cloud-init-net.sh&#34;, &#34;scripts/linux/configure-pam_mkhomedir.sh&#34;, &#34;scripts/linux/update-packages.sh&#34; ] pre_final_scripts = [ # [tl! ~~:6] &#34;scripts/linux/cleanup-cloud-init.sh&#34;, &#34;scripts/linux/cleanup-packages.sh&#34;, &#34;builds/linux/ubuntu/22-04-lts/hardening.sh&#34;, &#34;scripts/linux/zero-disk.sh&#34;, &#34;scripts/linux/generalize.sh&#34; ] As you can see, this sets up a lot of the properties which aren't strictly environment-specific, like:
partition sizes (ll. 14-23), virtual hardware settings (ll. 26-34), the hash and URL for the installer ISO (ll. 37-40), the command to be run at first boot to start the installer in unattended mode (ll. 47-53), a list of packages to install during the cloud-init install phase, primarily the sort that might be needed during later steps (ll. 62-67), a list of scripts to execute after cloud-init (ll. 71-78), and a list of scripts to run at the very end of the process (ll. 82-86). We'll look at the specifics of those scripts shortly, but first...
Packer Build File Let's explore the Packer build file, linux-server.pkr.hcl, which is the set of instructions used by Packer for performing the deployment. It's what ties everything else together.
This one is kind of complex, so we'll take it a block or two at a time.
It starts by setting the required minimum version of Packer and identifying what plugins (and versions) will be used to perform the build. I'm using the Packer plugin for Proxmox↗ for executing the build on Proxmox, and the Packer SSH key plugin↗ to simplify handling of SSH keys (we'll see how in the next block).
# torchlight! {&#34;lineNumbers&#34;:true} /* # Ubuntu Server 22.04 LTS template using the Packer Builder for Proxmox. */ // BLOCK: packer // The Packer configuration. packer { required_version = &#34;&gt;= 1.9.4&#34; # [tl! ~~] required_plugins { proxmox = { # [tl! ~~:2] version = &#34;&gt;= 1.1.8&#34; source = &#34;github.com/hashicorp/proxmox&#34; } ssh-key = { # [tl! ~~:2] version = &#34;= 1.0.3&#34; source = &#34;github.com/ivoronin/sshkey&#34; } } } This bit creates the sshkey data resource which uses the SSH plugin to generate a new SSH keypair to be used during the build process:
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:22} // BLOCK: locals // Defines the local variables. // Dynamically-generated SSH key data &#34;sshkey&#34; &#34;install&#34; { # [tl! ~~:2] type = &#34;ed25519&#34; name = &#34;packer_key&#34; } This first set of locals {} blocks take advantage of the dynamic nature of local variables. They call the vault function↗ to retrieve secrets from Vault and hold them as local variables. It's broken into a section for &quot;standard&quot; variables, which just hold configuration information like URLs and usernames, and one for &quot;sensitive&quot; variables like passwords and API tokens. The sensitive ones get sensitive = true to make sure they won't be printed in the logs anywhere.
# torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:31} ////////////////// Vault Locals ////////////////// // To retrieve secrets from Vault, the following environment variables MUST be defined: // - VAULT_ADDR : base URL of the Vault server (&#39;https://vault.example.com/&#39;) // - VAULT_TOKEN : token ID with rights to read/list // // Syntax for the vault() call: // vault(&#34;SECRET_ENGINE/data/SECRET_NAME&#34;, &#34;KEY&#34;) // // Standard configuration values: locals { # [tl! ~~:10] build_public_key = vault(&#34;packer/data/linux&#34;, &#34;public_key&#34;) // SSH public key for the default admin account build_username = vault(&#34;packer/data/linux&#34;, &#34;username&#34;) // Username for the default admin account proxmox_url = vault(&#34;packer/data/proxmox&#34;, &#34;api_url&#34;) // Proxmox API URL proxmox_insecure_connection = vault(&#34;packer/data/proxmox&#34;, &#34;insecure_connection&#34;) // Allow insecure connections to Proxmox proxmox_node = vault(&#34;packer/data/proxmox&#34;, &#34;node&#34;) // Proxmox node to use for the build proxmox_token_id = vault(&#34;packer/data/proxmox&#34;, &#34;token_id&#34;) // Proxmox token ID proxmox_iso_path = vault(&#34;packer/data/proxmox&#34;, &#34;iso_path&#34;) // Path to the ISO storage proxmox_vm_storage_pool = vault(&#34;packer/data/proxmox&#34;, &#34;vm_storage_pool&#34;) // Proxmox storage pool to use for the build proxmox_iso_storage_pool = vault(&#34;packer/data/proxmox&#34;, &#34;iso_storage_pool&#34;) // Proxmox storage pool to use for the ISO proxmox_network_bridge = vault(&#34;packer/data/proxmox&#34;, &#34;network_bridge&#34;) // Proxmox network bridge to use for the build } // Sensitive values: local &#34;bootloader_password&#34;{ # [tl! ~~10] expression = vault(&#34;packer/data/linux&#34;, &#34;bootloader_password&#34;) // Password to set for the bootloader sensitive = true } local &#34;build_password_hash&#34; { expression = vault(&#34;packer/data/linux&#34;, &#34;password_hash&#34;) // Password hash for the default admin account sensitive = true } local &#34;proxmox_token_secret&#34; { expression = vault(&#34;packer/data/proxmox&#34;, &#34;token_secret&#34;) // Token secret for authenticating to Proxmox sensitive = true } ////////////////// End Vault Locals ////////////////// And the next locals {} block leverages other expressions to:
dynamically set local.build_date to the current time (l. 70), combine individual string variables, like local.iso_checksum and local.iso_path (ll. 73-74), capture the keypair generated by the SSH key plugin (ll. 75-76), and use the templatefile() function↗ to process the cloud-init config file and insert appropriate variables (ll. 77-100) # torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:69} locals { build_date = formatdate(&#34;YYYY-MM-DD hh:mm ZZZ&#34;, timestamp()) # [tl! ~~] build_description = &#34;Ubuntu Server 22.04 LTS template\\nBuild date: \${local.build_date}\\nBuild tool: \${local.build_tool}&#34; build_tool = &#34;HashiCorp Packer \${packer.version}&#34; iso_checksum = &#34;\${var.iso_checksum_type}:\${var.iso_checksum_value}&#34; # [tl! ~~:1] iso_path = &#34;\${local.proxmox_iso_path}/\${var.iso_file}&#34; ssh_private_key_file = data.sshkey.install.private_key_path # [tl! ~~:1] ssh_public_key = data.sshkey.install.public_key data_source_content = { # [tl! ~~:23] &#34;/meta-data&#34; = file(&#34;\${abspath(path.root)}/data/meta-data&#34;) &#34;/user-data&#34; = templatefile(&#34;\${abspath(path.root)}/data/user-data.pkrtpl.hcl&#34;, { apt_mirror = var.cloud_init_apt_mirror apt_packages = var.cloud_init_apt_packages build_password_hash = local.build_password_hash build_username = local.build_username ssh_keys = concat([local.ssh_public_key], [local.build_public_key]) vm_guest_os_hostname = var.vm_name vm_guest_os_keyboard = var.vm_guest_os_keyboard vm_guest_os_language = var.vm_guest_os_language vm_guest_os_timezone = var.vm_guest_os_timezone vm_guest_part_audit = var.vm_guest_part_audit vm_guest_part_boot = var.vm_guest_part_boot vm_guest_part_efi = var.vm_guest_part_efi vm_guest_part_home = var.vm_guest_part_home vm_guest_part_log = var.vm_guest_part_log vm_guest_part_root = var.vm_guest_part_root vm_guest_part_swap = var.vm_guest_part_swap vm_guest_part_tmp = var.vm_guest_part_tmp vm_guest_part_var = var.vm_guest_part_var vm_guest_part_vartmp = var.vm_guest_part_vartmp }) } } The source {} block is where we get to the meat of the operation; it handles the actual creation of the virtual machine. This matches the input and local variables to the Packer options that tell it:
how to connect and authenticate to the Proxmox host (ll. 110-113, 116), what virtual hardware settings the VM should have (ll. 119-141), that local.data_source_content (which contains the rendered cloud-init configuration - we'll look at that in a moment) should be mounted as a virtual CD-ROM device (ll. 144-149), to download and verify the installer ISO from var.iso_url, save it to local.proxmox_iso_storage_pool, and mount it as the primary CD-ROM device (ll. 150-155), what command to run at boot to start the install process (l. 159), and how to communicate with the VM once the install is underway (ll. 163-168). # torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:104} // BLOCK: source // Defines the builder configuration blocks. source &#34;proxmox-iso&#34; &#34;linux-server&#34; { // Proxmox Endpoint Settings and Credentials insecure_skip_tls_verify = local.proxmox_insecure_connection # [tl! ~~:3] proxmox_url = local.proxmox_url token = local.proxmox_token_secret username = local.proxmox_token_id // Node Settings node = local.proxmox_node # [tl! ~~] // Virtual Machine Settings bios = &#34;ovmf&#34; # [tl! ~~:start] cores = var.vm_cpu_cores cpu_type = var.vm_cpu_type memory = var.vm_mem_size os = var.vm_guest_os_type scsi_controller = var.vm_scsi_controller sockets = var.vm_cpu_count template_description = local.build_description template_name = var.vm_name vm_name = var.vm_name disks { disk_size = var.vm_disk_size storage_pool = local.proxmox_vm_storage_pool } efi_config { efi_storage_pool = local.proxmox_vm_storage_pool efi_type = &#34;4m&#34; pre_enrolled_keys = true } network_adapters { bridge = local.proxmox_network_bridge model = var.vm_network_model } # [tl! ~~:end] // Removable Media Settings additional_iso_files { # [tl! ~~:5] cd_content = local.data_source_content cd_label = var.cd_label iso_storage_pool = local.proxmox_iso_storage_pool unmount = var.remove_cdrom } iso_checksum = local.iso_checksum # [tl! ~~:5] // iso_file = local.iso_path iso_url = var.iso_url iso_download_pve = true iso_storage_pool = local.proxmox_iso_storage_pool unmount_iso = var.remove_cdrom // Boot and Provisioning Settings boot_command = var.vm_boot_command # [tl! ~~] boot_wait = var.vm_boot_wait // Communicator Settings and Credentials communicator = &#34;ssh&#34; # [tl! ~~:5] ssh_clear_authorized_keys = var.build_remove_keys ssh_port = var.communicator_port ssh_private_key_file = local.ssh_private_key_file ssh_timeout = var.communicator_timeout ssh_username = local.build_username } By this point, we've got a functional virtual machine running on the Proxmox host but there are still some additional tasks to perform before it can be converted to a template. That's where the build {} block comes in: it connects to the VM and runs a few provisioner steps:
The file provisioner is used to copy any certificate files into the VM at /tmp (ll. 181-182) and to copy the join-domain.sh script↗ into the initial user's home directory (ll. 186-187). The first shell provisioner loops through and executes all the scripts listed in var.post_install_scripts (ll. 191-193). The last script in that list (update-packages.sh) finishes with a reboot for good measure. The second shell provisioner (ll. 197-203) waits for 30 seconds for the reboot to complete before it picks up with the remainder of the scripts, and it passes in the bootloader password for use by the hardening script. # torchlight! {&#34;lineNumbers&#34;:true, &#34;lineNumbersStart&#34;:171} // BLOCK: build // Defines the builders to run, provisioners, and post-processors. build { sources = [ &#34;source.proxmox-iso.linux-server&#34; ] provisioner &#34;file&#34; { source = &#34;certs&#34; # [tl! ~~:1] destination = &#34;/tmp&#34; } provisioner &#34;file&#34; { source = &#34;scripts/linux/join-domain.sh&#34; # [tl! ~~:1] destination = &#34;/home/\${local.build_username}/join-domain.sh&#34; } provisioner &#34;shell&#34; { execute_command = &#34;bash {{ .Path }}&#34; # [tl! ~~:2] expect_disconnect = true scripts = formatlist(&#34;\${path.cwd}/%s&#34;, var.post_install_scripts) } provisioner &#34;shell&#34; { env = { # [tl! ~~:6] &#34;BOOTLOADER_PASSWORD&#34; = local.bootloader_password } execute_command = &#34;{{ .Vars }} bash {{ .Path }}&#34; expect_disconnect = true pause_before = &#34;30s&#34; scripts = formatlist(&#34;\${path.cwd}/%s&#34;, var.pre_final_scripts) } } cloud-init Config Now let's back up a bit and drill into that cloud-init template file, builds/linux/ubuntu/22-04-lts/data/user-data.pkrtpl.hcl, which is loaded during the source {} block to tell the OS installer how to configure things on the initial boot.
The file follows the basic YAML-based syntax of a standard cloud config file↗, but with some HCL templating↗ to pull in certain values from elsewhere.
Some of the key tasks handled by this configuration include:
- stopping the SSH server (l. 10)
- setting the hostname (l. 12)
- inserting username and password (ll. 13-14)
- enabling (temporary) passwordless-sudo (ll. 17-18)
- installing a templated list of packages (ll. 30-35)
- inserting a templated list of SSH public keys (ll. 39-44)
- installing all package updates, disabling root logins, and setting the timezone (ll. 206-208)
- other needful things like setting up drive partitioning

cloud-init will reboot the VM once it completes, and when it comes back online it will have a DHCP-issued IP address and the accounts/credentials needed for Packer to log in via SSH and continue the setup in the build {} block.
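Since the .pkrtpl.hcl templating has to be rendered before this is valid YAML, a quick parse check of the rendered output can catch syntax slips before a full Packer run. This is just a rough sketch: it assumes the rendered file has been saved locally as user-data and that PyYAML is installed.

```bash
# sanity-check that the rendered user-data parses as YAML
# (this is only a syntax check, not a full autoinstall schema validation)
python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1])); print('YAML parses OK')" user-data
```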
# torchlight! {&#34;lineNumbers&#34;:true} #cloud-config autoinstall: %{ if length( apt_mirror ) &gt; 0 ~} apt: primary: - arches: [default] uri: &#34;\${ apt_mirror }&#34; %{ endif ~} early-commands: # [tl! **:5] - sudo systemctl stop ssh # [tl! ~~] identity: hostname: \${ vm_guest_os_hostname } # [tl! ~~:2] password: &#39;\${ build_password_hash }&#39; username: \${ build_username } keyboard: layout: \${ vm_guest_os_keyboard } late-commands: # [tl! **:2] - echo &#34;\${ build_username } ALL=(ALL) NOPASSWD:ALL&#34; &gt; /target/etc/sudoers.d/\${ build_username } # [tl! ~~:1] - curtin in-target --target=/target -- chmod 400 /etc/sudoers.d/\${ build_username } locale: \${ vm_guest_os_language } network: # [tl! collapse:9] network: version: 2 ethernets: mainif: match: name: e* critical: true dhcp4: true dhcp-identifier: mac %{ if length( apt_packages ) &gt; 0 ~} # [tl! **:5] packages: %{ for package in apt_packages ~} # [tl! ~~:2] - \${ package } %{ endfor ~} %{ endif ~} ssh: install-server: true allow-pw: true %{ if length( ssh_keys ) &gt; 0 ~} # [tl! **:5] authorized-keys: %{ for ssh_key in ssh_keys ~} # [tl! ~~2] - \${ ssh_key } %{ endfor ~} %{ endif ~} storage: config: # [tl! collapse:start] - ptable: gpt path: /dev/sda wipe: superblock type: disk id: disk-sda - device: disk-sda size: \${ vm_guest_part_efi }M wipe: superblock flag: boot number: 1 grub_device: true type: partition id: partition-0 - fstype: fat32 volume: partition-0 label: EFIFS type: format id: format-efi - device: disk-sda size: \${ vm_guest_part_boot }M wipe: superblock number: 2 type: partition id: partition-1 - fstype: xfs volume: partition-1 label: BOOTFS type: format id: format-boot - device: disk-sda size: -1 wipe: superblock number: 3 type: partition id: partition-2 - name: sysvg devices: - partition-2 type: lvm_volgroup id: lvm_volgroup-0 - name: home volgroup: lvm_volgroup-0 size: \${ vm_guest_part_home}M wipe: superblock type: lvm_partition id: lvm_partition-home - fstype: xfs volume: lvm_partition-home type: format label: HOMEFS id: format-home - name: tmp volgroup: lvm_volgroup-0 size: \${ vm_guest_part_tmp }M wipe: superblock type: lvm_partition id: lvm_partition-tmp - fstype: xfs volume: lvm_partition-tmp type: format label: TMPFS id: format-tmp - name: var volgroup: lvm_volgroup-0 size: \${ vm_guest_part_var }M wipe: superblock type: lvm_partition id: lvm_partition-var - fstype: xfs volume: lvm_partition-var type: format label: VARFS id: format-var - name: log volgroup: lvm_volgroup-0 size: \${ vm_guest_part_log }M wipe: superblock type: lvm_partition id: lvm_partition-log - fstype: xfs volume: lvm_partition-log type: format label: LOGFS id: format-log - name: audit volgroup: lvm_volgroup-0 size: \${ vm_guest_part_audit }M wipe: superblock type: lvm_partition id: lvm_partition-audit - fstype: xfs volume: lvm_partition-audit type: format label: AUDITFS id: format-audit - name: vartmp volgroup: lvm_volgroup-0 size: \${ vm_guest_part_vartmp }M wipe: superblock type: lvm_partition id: lvm_partition-vartmp - fstype: xfs volume: lvm_partition-vartmp type: format label: VARTMPFS id: format-vartmp - name: root volgroup: lvm_volgroup-0 %{ if vm_guest_part_root == 0 ~} size: -1 %{ else ~} size: \${ vm_guest_part_root }M %{ endif ~} wipe: superblock type: lvm_partition id: lvm_partition-root - fstype: xfs volume: lvm_partition-root type: format label: ROOTFS id: format-root - path: / device: format-root type: mount id: mount-root - path: /boot device: format-boot type: mount id: mount-boot - path: /boot/efi device: 
format-efi type: mount id: mount-efi - path: /home device: format-home type: mount id: mount-home - path: /tmp device: format-tmp type: mount id: mount-tmp - path: /var device: format-var type: mount id: mount-var - path: /var/log device: format-log type: mount id: mount-log - path: /var/log/audit device: format-audit type: mount id: mount-audit - path: /var/tmp device: format-vartmp type: mount id: mount-vartmp # [tl! collapse:end] user-data: # [tl! **:3] package_upgrade: true # [tl! ~~:2] disable_root: true timezone: \${ vm_guest_os_timezone } version: 1 Setup Scripts After the cloud-init setup is completed, Packer control gets passed to the build {} block and the provisioners there run through a series of scripts to perform additional configuration of the guest OS. I split the scripts into two sets, which I called post_install_scripts and pre_final_scripts, with a reboot that happens in between them.
Post Install

The post-install scripts run after the cloud-init installation has completed, and (depending on the exact Linux flavor) may include:
wait-for-cloud-init.sh, which just checks to confirm that cloud-init has truly finished before proceeding: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # waits for cloud-init to finish before proceeding set -eu echo &#39;&gt;&gt; Waiting for cloud-init...&#39; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1 done cleanup-subiquity.sh to remove the default network configuration generated by the Ubuntu installer: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # cleans up cloud-init config from subiquity set -eu if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg echo &#39;&gt;&gt; Deleting subiquity cloud-init config...&#39; fi if [ -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg ]; then sudo rm /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg echo &#39;&gt;&gt; Deleting subiquity cloud-init network config...&#39; fi install-ca-certs.sh to install any trusted CA certs which were in the certs/ folder of the Packer environment and copied to /tmp/certs/ in the guest: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # installs trusted CA certs from /tmp/certs/ set -eu if awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q debian; then echo &#39;&gt;&gt; Installing certificates...&#39; if ls /tmp/certs/*.cer &gt;/dev/null 2&gt;&amp;1; then sudo cp /tmp/certs/* /usr/local/share/ca-certificates/ cd /usr/local/share/ca-certificates/ for file in *.cer; do sudo mv -- &#34;$file&#34; &#34;\${file%.cer}.crt&#34; done sudo /usr/sbin/update-ca-certificates else echo &#39;No certs to install.&#39; fi elif awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q rhel; then echo &#39;&gt;&gt; Installing certificates...&#39; if ls /tmp/certs/*.cer &gt;/dev/null 2&gt;&amp;1; then sudo cp /tmp/certs/* /etc/pki/ca-trust/source/anchors/ cd /etc/pki/ca-trust/source/anchors/ for file in *.cer; do sudo mv -- &#34;$file&#34; &#34;\${file%.cer}.crt&#34; done sudo /bin/update-ca-trust else echo &#39;No certs to install.&#39; fi fi disable-multipathd.sh to, uh, disable multipathd to keep things lightweight and simple: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # disables multipathd set -eu echo &#39;&gt;&gt; Disabling multipathd...&#39; sudo systemctl disable multipathd prune-motd.sh to remove those noisy, promotional default messages that tell you to enable cockpit or check out Ubuntu Pro or whatever: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # prunes default noisy MOTD set -eu echo &#39;&gt;&gt; Pruning default MOTD...&#39; if awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q rhel; then if [ -L &#34;/etc/motd.d/insights-client&#34; ]; then sudo unlink /etc/motd.d/insights-client fi elif awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q debian; then sudo chmod -x /etc/update-motd.d/91-release-upgrade fi persist-cloud-init-net.sh to ensure the cloud-init cache isn't wiped on reboot so the network settings will stick: # torchlight! 
{&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # ensures network settings are preserved on boot set -eu echo &#39;&gt;&gt; Preserving network settings...&#39; if grep -q &#39;manual_cache_clean&#39; /etc/cloud/cloud.cfg; then sudo sed -i &#39;s/^manual_cache_clean.*$/manual_cache_clean: True/&#39; /etc/cloud/cloud.cfg else echo &#39;manual_cache_clean: True&#39; | sudo tee -a /etc/cloud/cloud.cfg fi configure-pam_mkhomedir.sh to configure the pam_mkhomedir module to create user homedirs with the appropriate (750) permission set: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # configures pam_mkhomedir to create home directories with 750 permissions set -eu echo &#39;&gt;&gt; Configuring pam_mkhomedir...&#39; sudo sed -i &#39;s/optional.*pam_mkhomedir.so/required\\t\\tpam_mkhomedir.so umask=0027/&#39; /usr/share/pam-configs/mkhomedir update-packages.sh to install any available package updates and reboot: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # updates packages and reboots set -eu if awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q rhel; then if which dnf &amp;&gt;/dev/null; then echo &#39;&gt;&gt; Checking for and installing updates...&#39; sudo dnf -y update else echo &#39;&gt;&gt; Checking for and installing updates...&#39; sudo yum -y update fi echo &#39;&gt;&gt; Rebooting!&#39; sudo reboot elif awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q debian; then echo &#39;&gt;&gt; Checking for and installing updates...&#39; sudo apt-get update &amp;&amp; sudo apt-get -y upgrade echo &#39;&gt;&gt; Rebooting!&#39; sudo reboot fi After the reboot, the process picks back up with the pre-final scripts.
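For reference, that first shell provisioner is essentially just running these scripts in order with bash (Packer's formatlist() turns var.post_install_scripts into absolute paths). If I wanted to dry-run the same sequence by hand inside a throwaway VM, a minimal loop like this would do it; the paths are assumed relative to the repo root, and note that the last script ends with a reboot.

```bash
#!/usr/bin/env bash
# rough local stand-in for the post-install provisioner: run each script in
# order and stop on the first failure
set -eu

scripts=(
  scripts/linux/wait-for-cloud-init.sh
  scripts/linux/cleanup-subiquity.sh
  scripts/linux/install-ca-certs.sh
  scripts/linux/disable-multipathd.sh
  scripts/linux/prune-motd.sh
  scripts/linux/persist-cloud-init-net.sh
  scripts/linux/configure-pam_mkhomedir.sh
  scripts/linux/update-packages.sh  # reboots the machine when it finishes
)

for script in "${scripts[@]}"; do
  echo ">> Running ${script}..."
  bash "${script}"
done
```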
Pre-Final cleanup-cloud-init.sh performs a clean↗ action to get the template ready to be re-used: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # cleans up cloud-init state set -eu echo &#39;&gt;&gt; Cleaning up cloud-init state...&#39; sudo cloud-init clean -l cleanup-packages.sh uninstalls packages and kernel versions which are no longer needed: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # cleans up unneeded packages to reduce the size of the image set -eu if awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q debian; then echo &#39;&gt;&gt; Cleaning up unneeded packages...&#39; sudo apt-get -y autoremove --purge sudo apt-get -y clean elif awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q rhel; then if which dnf &amp;&gt;/dev/null; then echo &#39;&gt;&gt; Cleaning up unneeded packages...&#39; sudo dnf -y remove linux-firmware sudo dnf -y remove &#34;$(dnf repoquery --installonly --latest-limit=-1 -q)&#34; sudo dnf -y autoremove sudo dnf -y clean all --enablerepo=\\*; else echo &#39;&gt;&gt; Cleaning up unneeded packages...&#39; sudo yum -y remove linux-firmware sudo package-cleanup --oldkernels --count=1 sudo yum -y autoremove sudo yum -y clean all --enablerepo=\\*; fi fi build/linux/22-04-lts/hardening.sh is a build-specific script to perform basic hardening tasks toward the CIS Level 2 server benchmark. It doesn't have a lot of fancy logic because it is only intended to be run during this package process when it's making modifications from a known state. It's long, so I won't repost it here, and I may end up writing a separate post specifically about this hardening process, but you're welcome to view the full script for Ubuntu 22.04 here↗. zero-disk.sh fills a file with zeroes until the disk runs out of space, and then removes it, resulting in a reduced template image size: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # zeroes out free space to reduce disk size set -eu echo &#39;&gt;&gt; Zeroing free space to reduce disk size...&#39; sudo sh -c &#39;dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync&#39; sudo sh -c &#39;rm -f /EMPTY; sync; sleep 1; sync&#39; generalize.sh performs final steps to get the template ready for cloning, including removing the sudoers.d configuration which allowed passwordless elevation during the setup: # torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # prepare a VM to become a template. 
set -eu echo &#39;&gt;&gt; Clearing audit logs...&#39; sudo sh -c &#39;if [ -f /var/log/audit/audit.log ]; then cat /dev/null &gt; /var/log/audit/audit.log fi&#39; sudo sh -c &#39;if [ -f /var/log/wtmp ]; then cat /dev/null &gt; /var/log/wtmp fi&#39; sudo sh -c &#39;if [ -f /var/log/lastlog ]; then cat /dev/null &gt; /var/log/lastlog fi&#39; sudo sh -c &#39;if [ -f /etc/logrotate.conf ]; then logrotate -f /etc/logrotate.conf 2&gt;/dev/null fi&#39; sudo rm -rf /var/log/journal/* sudo rm -f /var/lib/dhcp/* sudo find /var/log -type f -delete echo &#39;&gt;&gt; Clearing persistent udev rules...&#39; sudo sh -c &#39;if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then rm /etc/udev/rules.d/70-persistent-net.rules fi&#39; # check for only RHEL releases if awk -F= &#39;/^ID=/{print $2}&#39; /etc/os-release | grep -q rhel; then echo &#39;&gt;&gt; Clearing RHSM subscription...&#39; sudo subscription-manager unregister sudo subscription-manager clean fi echo &#39;&gt;&gt; Clearing temp dirs...&#39; sudo rm -rf /tmp/* sudo rm -rf /var/tmp/* # check for RHEL-like releases (RHEL and Rocky) if awk -F= &#39;/^ID/{print $2}&#39; /etc/os-release | grep -q rhel; then sudo rm -rf /var/cache/dnf/* sudo rm -rf /var/log/rhsm/* fi echo &#39;&gt;&gt; Clearing host keys...&#39; sudo rm -f /etc/ssh/ssh_host_* echo &#39;&gt;&gt; Removing Packer SSH key...&#39; sed -i &#39;/packer_key/d&#39; ~/.ssh/authorized_keys echo &#39;&gt;&gt; Clearing machine-id...&#39; sudo truncate -s 0 /etc/machine-id if [ -f /var/lib/dbus/machine-id ]; then sudo rm -f /var/lib/dbus/machine-id sudo ln -s /etc/machine-id /var/lib/dbus/machine-id fi echo &#39;&gt;&gt; Clearing shell history...&#39; unset HISTFILE history -cw echo &gt; ~/.bash_history sudo rm -f /root/.bash_history echo &#39;&gt;&gt; Clearing sudoers.d...&#39; sudo rm -f /etc/sudoers.d/* Packer Build At this point, I should (in theory) be able to kick off the build from my laptop with a Packer command - but first, I'll need to set up some environment variables so that Packer will be able to communicate with my Vault server:
export VAULT_ADDR="https://vault.tailnet-name.ts.net/" # [tl! .cmd:1]
export VAULT_TOKEN="hvs.CAES[...]GSFQ"

Okay, now I can run the Ubuntu 22.04 build from the top-level of my Packer directory like so:
packer init builds/linux/ubuntu/22-04-lts # [tl! .cmd:1] packer build -on-error=cleanup -force builds/linux/ubuntu/22-04-lts proxmox-iso.linux-server: output will be in this color. # [tl! .nocopy:start] ==&gt; proxmox-iso.linux-server: Creating CD disk... # [tl! collapse:15] proxmox-iso.linux-server: xorriso 1.5.6 : RockRidge filesystem manipulator, libburnia project. proxmox-iso.linux-server: xorriso : NOTE : Environment variable SOURCE_DATE_EPOCH encountered with value 315532800 proxmox-iso.linux-server: Drive current: -outdev &#39;stdio:/tmp/packer684761677.iso&#39; proxmox-iso.linux-server: Media current: stdio file, overwriteable proxmox-iso.linux-server: Media status : is blank proxmox-iso.linux-server: Media summary: 0 sessions, 0 data blocks, 0 data, 174g free proxmox-iso.linux-server: xorriso : WARNING : -volid text does not comply to ISO 9660 / ECMA 119 rules proxmox-iso.linux-server: Added to ISO image: directory &#39;/&#39;=&#39;/tmp/packer_to_cdrom2909484587&#39; proxmox-iso.linux-server: xorriso : UPDATE : 2 files added in 1 seconds proxmox-iso.linux-server: xorriso : UPDATE : 2 files added in 1 seconds proxmox-iso.linux-server: ISO image produced: 186 sectors proxmox-iso.linux-server: Written to medium : 186 sectors at LBA 0 proxmox-iso.linux-server: Writing to &#39;stdio:/tmp/packer684761677.iso&#39; completed successfully. proxmox-iso.linux-server: Done copying paths from CD_dirs proxmox-iso.linux-server: Uploaded ISO to local:iso/packer684761677.iso ==&gt; proxmox-iso.linux-server: Force set, checking for existing artifact on PVE cluster ==&gt; proxmox-iso.linux-server: No existing artifact found ==&gt; proxmox-iso.linux-server: Creating VM ==&gt; proxmox-iso.linux-server: No VM ID given, getting next free from Proxmox ==&gt; proxmox-iso.linux-server: Starting VM ==&gt; proxmox-iso.linux-server: Waiting 4s for boot ==&gt; proxmox-iso.linux-server: Typing the boot command ==&gt; proxmox-iso.linux-server: Waiting for SSH to become available... # [tl! .nocopy:end] It'll take a few minutes while Packer waits on SSH, and while I wait on that, I can look at the Proxmox console for the VM to follow along with the installer's progress:
That successful SSH connection signifies the transition from the source {} block to the build {} block, so it starts with uploading any certs and the join-domain.sh script before getting into running those post-install tasks:
==&gt; proxmox-iso.linux-server: Connected to SSH! [tl! .nocopy:start **:2] ==&gt; proxmox-iso.linux-server: Uploading certs =&gt; /tmp ==&gt; proxmox-iso.linux-server: Uploading scripts/linux/join-domain.sh =&gt; /home/john/join-domain.sh proxmox-iso.linux-server: join-domain.sh 5.59 KiB / 5.59 KiB [========================================================================================================] 100.00% 0s ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/wait-for-cloud-init.sh [tl! **:start] proxmox-iso.linux-server: &gt;&gt; Waiting for cloud-init... ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/cleanup-subiquity.sh proxmox-iso.linux-server: &gt;&gt; Deleting subiquity cloud-init config... proxmox-iso.linux-server: &gt;&gt; Deleting subiquity cloud-init network config... ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/install-ca-certs.sh proxmox-iso.linux-server: &gt;&gt; Installing certificates... proxmox-iso.linux-server: No certs to install. ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/disable-multipathd.sh proxmox-iso.linux-server: &gt;&gt; Disabling multipathd... [tl! **:end] ==&gt; proxmox-iso.linux-server: Removed /etc/systemd/system/multipath-tools.service. ==&gt; proxmox-iso.linux-server: Removed /etc/systemd/system/sockets.target.wants/multipathd.socket. ==&gt; proxmox-iso.linux-server: Removed /etc/systemd/system/sysinit.target.wants/multipathd.service. ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/prune-motd.sh [tl! **:3] proxmox-iso.linux-server: &gt;&gt; Pruning default MOTD... ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/persist-cloud-init-net.sh proxmox-iso.linux-server: &gt;&gt; Preserving network settings... proxmox-iso.linux-server: manual_cache_clean: True ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/configure-pam_mkhomedir.sh [tl! **:3] proxmox-iso.linux-server: &gt;&gt; Configuring pam_mkhomedir... ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/update-packages.sh proxmox-iso.linux-server: &gt;&gt; Checking for and installing updates... proxmox-iso.linux-server: Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease proxmox-iso.linux-server: Hit:2 http://us.archive.ubuntu.com/ubuntu jammy InRelease proxmox-iso.linux-server: Hit:3 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease proxmox-iso.linux-server: Hit:4 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease proxmox-iso.linux-server: Reading package lists... proxmox-iso.linux-server: Reading package lists... proxmox-iso.linux-server: Building dependency tree... proxmox-iso.linux-server: Reading state information... proxmox-iso.linux-server: Calculating upgrade... proxmox-iso.linux-server: The following packages have been kept back: proxmox-iso.linux-server: python3-update-manager update-manager-core proxmox-iso.linux-server: 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. proxmox-iso.linux-server: &gt;&gt; Rebooting! [tl! 
There's a brief pause during the reboot, and then things pick back up with the hardening script and the cleanup tasks:
==&gt; proxmox-iso.linux-server: Pausing 30s before the next provisioner... [tl! .nocopy:start] ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/cleanup-cloud-init.sh [tl! **:3] proxmox-iso.linux-server: &gt;&gt; Cleaning up cloud-init state... ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/cleanup-packages.sh proxmox-iso.linux-server: &gt;&gt; Cleaning up unneeded packages... proxmox-iso.linux-server: Reading package lists... proxmox-iso.linux-server: Building dependency tree... proxmox-iso.linux-server: Reading state information... proxmox-iso.linux-server: 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/builds/linux/ubuntu/22-04-lts/hardening.sh [tl! **:1] proxmox-iso.linux-server: &gt;&gt;&gt; Beginning hardening tasks... proxmox-iso.linux-server: [...] proxmox-iso.linux-server: &gt;&gt;&gt; Hardening script complete! ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/zero-disk.sh [tl! **:1] proxmox-iso.linux-server: &gt;&gt; Zeroing free space to reduce disk size... ==&gt; proxmox-iso.linux-server: dd: error writing &#39;/EMPTY&#39;: No space left on device ==&gt; proxmox-iso.linux-server: 25905+0 records in ==&gt; proxmox-iso.linux-server: 25904+0 records out ==&gt; proxmox-iso.linux-server: 27162312704 bytes (27 GB, 25 GiB) copied, 10.7024 s, 2.5 GB/s ==&gt; proxmox-iso.linux-server: Provisioning with shell script: /home/john/projects/packer-proxmox-templates/scripts/linux/generalize.sh [tl! **:10] proxmox-iso.linux-server: &gt;&gt; Clearing audit logs... proxmox-iso.linux-server: &gt;&gt; Clearing persistent udev rules... proxmox-iso.linux-server: &gt;&gt; Clearing temp dirs... proxmox-iso.linux-server: &gt;&gt; Clearing host keys... proxmox-iso.linux-server: &gt;&gt; Removing Packer SSH key... proxmox-iso.linux-server: &gt;&gt; Clearing machine-id... proxmox-iso.linux-server: &gt;&gt; Clearing shell history... proxmox-iso.linux-server: &gt;&gt; Clearing sudoers.d... ==&gt; proxmox-iso.linux-server: Stopping VM ==&gt; proxmox-iso.linux-server: Converting VM to template proxmox-iso.linux-server: Deleted generated ISO from local:iso/packer152219352.iso Build &#39;proxmox-iso.linux-server&#39; finished after 10 minutes 52 seconds. [tl! **:5] ==&gt; Wait completed after 10 minutes 52 seconds ==&gt; Builds finished. The artifacts of successful builds are: --&gt; proxmox-iso.linux-server: A template was created: 105 [tl! .nocopy:end] That was a lot of prep work, but now that everything is in place it only takes about eleven minutes to create a fresh Ubuntu 22.04 template, and that template is fully up-to-date and hardened to about 95% of the CIS Level 2 benchmark. This will save me a lot of time as I build new VMs in my homelab.
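As a quick sanity check, I can also confirm from the Proxmox side that the template really exists. A rough sketch (the hostname here is a placeholder, and 105 is just the VM ID that Packer reported at the end of this build):

```bash
# list VMs/templates on the node and check the template flag on the new ID
ssh root@proxmox-host "qm list | grep 105"
ssh root@proxmox-host "qm config 105 | grep '^template'"  # should print 'template: 1'
```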
Wrapper Script

But having to export the Vault variables and run the Packer commands manually is a bit of a chore, so I put together a couple of helper scripts to streamline things. This will really come in handy as I add new OS variants and schedule automated builds with GitHub Actions.
First, I made a vault-env.sh script to hold my Vault address and the token for Packer.
Sensitive Values!
The VAULT_TOKEN variable is a sensitive value and should be protected. This file should be added to .gitignore to ensure it doesn't get inadvertently committed to a repo.
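For example, something along these lines (run from the root of the repo) keeps the file out of version control and confirms that git agrees:

```bash
echo 'vault-env.sh' >> .gitignore
git check-ignore -v vault-env.sh  # prints the matching .gitignore rule if the file is ignored
```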
# torchlight! {"lineNumbers":true}
#!/usr/bin/env bash
set -eu
export VAULT_ADDR="https://vault.tailnet-name.ts.net/"
export VAULT_TOKEN="hvs.CAES[...]GSFQ"

This build.sh script expects a single argument: the name of the build to create. It then checks to see if the VAULT_TOKEN environment variable is already set; if not, it tries to source it from vault-env.sh. And then it kicks off the appropriate build.
# torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # Run a single packer build # # Specify the build as an argument to the script. Ex: # ./build.sh ubuntu2204 set -eu if [ $# -ne 1 ]; then echo &#34;&#34;&#34; Syntax: $0 [BUILD] Where [BUILD] is one of the supported OS builds: ubuntu2204 ubuntu2404 &#34;&#34;&#34; exit 1 fi if [ ! &#34;\${VAULT_TOKEN+x}&#34; ]; then source vault-env.sh || ( echo &#34;No Vault config found&#34;; exit 1 ) fi build_name=&#34;\${1,,}&#34; build_path= case $build_name in ubuntu2204) build_path=&#34;builds/linux/ubuntu/22-04-lts/&#34; ;; ubuntu2404) build_path=&#34;builds/linux/ubuntu/24-04-lts/&#34; ;; *) echo &#34;Unknown build; exiting...&#34; exit 1 ;; esac packer init &#34;\${build_path}&#34; packer build -on-error=cleanup -force &#34;\${build_path}&#34; Then I can kick off a build with just:
./build.sh ubuntu2204 # [tl! .cmd]
proxmox-iso.linux-server: output will be in this color. # [tl! .nocopy:6]
==> proxmox-iso.linux-server: Creating CD disk...
proxmox-iso.linux-server: xorriso 1.5.6 : RockRidge filesystem manipulator, libburnia project.
proxmox-iso.linux-server: xorriso : NOTE : Environment variable SOURCE_DATE_EPOCH encountered with value 315532800
proxmox-iso.linux-server: Drive current: -outdev 'stdio:/tmp/packer2372067848.iso'
[...]

Up Next...

Being able to generate a template on-demand is pretty cool, but the next stage of this project is to integrate it with a GitHub Actions workflow so that the templates can be built automatically on a schedule or as the configuration gets changed. But this post is long enough (and I've been poking at it for long enough) so that explanation will have to wait for another time.
(If you'd like a sneak peek of what's in store, take a self-guided tour of the GitHub repo↗.)
Stay tuned! Update: it's here: Automate Packer Builds with GitHub Actions.
(Description: Using Packer and Vault to build VM templates for my Proxmox homelab. URL: https://runtimeterror.dev/building-proxmox-templates-packer/)

Kudos With Cabin
(tags: hugo, javascript, meta, selfhosting | https://runtimeterror.dev/kudos-with-cabin/)

I'm not one to really worry about page view metrics, but I do like to see which of my posts attract the most attention - and where that attention might be coming from. That insight has allowed me to find new blogs and sites that have linked to mine, and has tipped me off that maybe I should update that four-year-old post that's suddenly getting renewed traffic from Reddit.
In my quest for such knowledge, last week I switched my various web properties back to using Cabin↗ for "privacy-first, carbon conscious web analytics". I really like how lightweight and deliberately minimal Cabin is, and the portal does a great job of presenting the information that I care about. With this change, though, I gave up the cute little upvote widgets provided by the previous analytics platform.
I recently shared on my Bear weblog↗ how I was hijacking Bear's built-in upvote button to send a "kudos" event↗ to Cabin and tally those actions there.
Well today I implemented a similar thing on this blog. Without an existing widget to hijack, I needed to create this one from scratch using a combination of HTML in my page template, CSS to style it, and JavaScript to fire the event.
Layout

My Hugo↗ setup uses layouts/_default/single.html to control how each post is rendered, and I've already got a section at the bottom which displays the "Reply by email" link if replies are permitted on that page:
# torchlight! {&#34;lineNumbers&#34;:true} &lt;div class=&#34;content__body&#34;&gt; &lt;!-- [tl! reindex(33))] --&gt; {{ .Content }} &lt;/div&gt; {{- $reply := true }} {{- if eq .Site.Params.reply false }} {{- $reply = false }} {{- else if eq .Params.reply false }} {{- $reply = false }} {{- end }} {{- if eq $reply true }} &lt;hr&gt; &lt;span class=&#34;post_email_reply&#34;&gt;&lt;a href=&#34;mailto:[email protected]?Subject=Re: {{ .Title }}&#34;&gt;📧 Reply by email&lt;/a&gt;&lt;/span&gt; {{- end }} I'll only want the upvote widget to appear on pages where replies are permitted so this makes a logical place to insert the new code:
# torchlight! {&#34;lineNumbers&#34;:true} &lt;div class=&#34;content__body&#34;&gt; &lt;!-- [tl! reindex(33)] --&gt; {{ .Content }} &lt;/div&gt; {{- $reply := true }} {{- if eq .Site.Params.reply false }} {{- $reply = false }} {{- else if eq .Params.reply false }} {{- $reply = false }} {{- end }} {{- if eq $reply true }} &lt;hr&gt; &lt;div class=&#34;kudos-container&#34;&gt; &lt;!-- [tl! ++:5 **:5] --&gt; &lt;button class=&#34;kudos-button&#34;&gt; &lt;span class=&#34;emoji&#34;&gt;👍&lt;/span&gt; &lt;/button&gt; &lt;span class=&#34;kudos-text&#34;&gt;Enjoyed this?&lt;/span&gt; &lt;/div&gt; &lt;span class=&#34;post_email_reply&#34;&gt;&lt;a href=&#34;mailto:[email protected]?Subject=Re: {{ .Title }}&#34;&gt;📧 Reply by email&lt;/a&gt;&lt;/span&gt; {{- end }} The button won't actually do anything yet, but it'll at least appear on the page. I can use some CSS to make it look a bit nicer though.
CSS

My theme uses static/css/custom.css to override the defaults, so I'll drop some styling bits at the bottom of that file:
# torchlight! {&#34;lineNumbers&#34;:true} /* Cabin kudos styling [tl! reindex(406)] */ .kudos-container { display: flex; align-items: center; } .kudos-button { background: none; border: none; cursor: pointer; font-size: 1.2rem; padding: 0; margin-right: 0.25rem; } .kudos-button:disabled { cursor: default; } .kudos-button .emoji { display: inline-block; transition: transform 0.3s ease; } .kudos-button.clicked .emoji { transform: rotate(360deg); } .kudos-text { transition: font-style 0.3s ease; } .kudos-text.thanks { font-style: italic; } I got carried away a little bit and decided to add a fun animation when the button gets clicked. Which brings me to what happens when this thing gets clicked.
JavaScript

I want the button to do a little bit more than just send the event to Cabin, so I decided to break that out into a separate script, assets/js/kudos.js. This script will latch on to the kudos-related elements, and when the button gets clicked it will (1) fire off the cabin.event('kudos') function to record the event, (2) disable the button to discourage repeated clicks, (3) change the displayed text to Thanks!, and (4) celebrate the event by spinning the emoji and replacing it with a party popper.
// torchlight! {&#34;lineNumbers&#34;:true} // manipulates the post upvote &#34;kudos&#34; button behavior window.onload = function() { // get the button and text elements const kudosButton = document.querySelector(&#39;.kudos-button&#39;); const kudosText = document.querySelector(&#39;.kudos-text&#39;); const emojiSpan = kudosButton.querySelector(&#39;.emoji&#39;); kudosButton.addEventListener(&#39;click&#39;, function(event) { // send the event to Cabin cabin.event(&#39;kudos&#39;) // disable the button kudosButton.disabled = true; kudosButton.classList.add(&#39;clicked&#39;); // change the displayed text kudosText.textContent = &#39;Thanks!&#39;; kudosText.classList.add(&#39;thanks&#39;); // spin the emoji emojiSpan.style.transform = &#39;rotate(360deg)&#39;; // change the emoji to celebrate setTimeout(function() { emojiSpan.textContent = &#39;🎉&#39;; }, 150); // half of the css transition time for a smooth mid-rotation change }); } The last step is to go back to my single.html layout and pull in this new JavaScript file. I placed it in the site's assets/ folder so that Hugo can apply its minifying magic so I'll need to load it in as a page resource:
# torchlight! {&#34;lineNumbers&#34;:true} &lt;div class=&#34;content__body&#34;&gt; &lt;!-- [tl! reindex(33)] --&gt; {{ .Content }} &lt;/div&gt; {{- $reply := true }} &lt;!-- [tl! collapse:6] --&gt; {{- if eq .Site.Params.reply false }} {{- $reply = false }} {{- else if eq .Params.reply false }} {{- $reply = false }} {{- end }} {{- if eq $reply true }} &lt;hr&gt; &lt;div class=&#34;kudos-container&#34;&gt; &lt;button class=&#34;kudos-button&#34;&gt; &lt;span class=&#34;emoji&#34;&gt;👍&lt;/span&gt; &lt;/button&gt; &lt;span class=&#34;kudos-text&#34;&gt;Enjoyed this?&lt;/span&gt; &lt;/div&gt; {{ $kudos := resources.Get &#34;js/kudos.js&#34; | minify }} &lt;!-- [tl! ++:1 **:1] --&gt; &lt;script src=&#34;{{ $kudos.RelPermalink }}&#34;&gt;&lt;/script&gt; &lt;span class=&#34;post_email_reply&#34;&gt;&lt;a href=&#34;mailto:[email protected]?Subject=Re: {{ .Title }}&#34;&gt;📧 Reply by email&lt;/a&gt;&lt;/span&gt; {{- end }} You might have noticed that I'm not doing anything to display the upvote count on the page itself. I don't feel like the reader really needs to know how (un)popular a post may be before deciding to vote it up; the total count isn't really relevant. (Also, the Cabin stats don't update in realtime and I just didn't want to deal with that... but mostly that first bit.)
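Before pushing the change, a quick local build is an easy way to confirm that Hugo actually picked up and minified the new script. Just a sketch; the grep simply looks for the Cabin call from kudos.js somewhere in the generated output:

```bash
hugo --minify --quiet
# the processed copy of kudos.js should now exist somewhere under public/
grep -rl "cabin.event" public/ | head
```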
In any case, after clicking the 👍 button on a few pages I can see the kudos events recorded in my Cabin portal↗. Go on, try it out:
(Description: Using Cabin's event tracking to add a simple post upvote widget to my Hugo site. URL: https://runtimeterror.dev/kudos-with-cabin/)

Further Down the Bunny Hole
(tags: bunny, cicd, hugo, meta, selfhosting | https://runtimeterror.dev/further-down-the-bunny-hole/)

It wasn't too long ago (January, in fact) that I started hosting this site with Neocities. I was pretty pleased with that setup, but a few weeks ago my monitoring setup↗ started reporting that the site was down. And sure enough, trying to access the site would return a generic error message stating that the site was unknown. I eventually discovered that this was due to Neocities "forgetting" that the site was linked to the runtimeterror.dev domain. It was easy enough to just re-enter that domain in the configuration, and that immediately fixed things... until a few days later when the same thing happened again.
The same problem has now occurred five or six times, and my messages to the Neocities support contact have gone unanswered. I didn't see anyone else online reporting this exact issue, but I found several posts on Reddit about sites getting randomly broken (or even deleted!) and support taking a week (or more) to reply. I don't have that kind of patience, so I started to consider moving my content away from Neocities and cancelling my $5/month Supporter subscription.
I recently↗ started using bunny.net↗ for the site's DNS, and had also leveraged Bunny's CDN for hosting font files. This setup has been working great for me, and I realized that I could also use Bunny's CDN for hosting the entirety of my static site as well. After all, serving static files on the web is exactly what a CDN is great at. After an hour or two of tinkering, I successfully switched hosting setups with just a few seconds of downtime.
Here's how I did it.
Storage Zone

I started by logging into Bunny's dashboard and creating a new storage zone to hold the files. I gave it a name (like my-storage-zone), selected the main storage region nearest to me (New York), and also enabled replication to a site in Europe and another in Asia for good measure. (That's going to cost me a whopping $0.025/GB.)
After the storage zone was created, I clicked over to the FTP & API Access page to learn how to upload files. I did some initial tests with an FTP client to confirm that it worked, but mostly I just made a note of the credentials since they'll be useful later.
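The same storage zone can also be reached over Bunny's HTTP storage API, which is handy for quick tests without an FTP client. A sketch with placeholder values; the endpoint, zone name, and password all come from that FTP & API Access page:

```bash
# upload a small test object to the storage zone via the HTTP API
curl --fail --request PUT \
  --url "https://ny.storage.bunnycdn.com/my-storage-zone/test.txt" \
  --header "AccessKey: ${BUNNY_STORAGE_PASSWORD}" \
  --header "Content-Type: text/plain" \
  --data-binary "it works!"
```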
Pull Zone

To get files out of the storage zone and onto the web site, I also had to configure a pull zone. While looking at my new storage zone, I clicked Connected Pull Zones and then the +Connect Pull Zone button at the top right of the screen. From there, I selected the option to Add Pull Zone, and that whisked me away to the zone creation wizard.
Once again, I gave the zone a name (my-pull-zone). I left the origin settings configured to pull from my new storage zone, and also left it on the standard tier. I left all the pricing zones enabled so that the content can be served from whatever region is closest. (Even with all pricing zones activated, my delivery costs will still be just $0.01 to $0.06/GB.)
After admiring the magnificence of my new pull zone, I clicked the menu button at the top right, selected Copy Pull Zone ID, and made a note of that as well.
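That ID is what Bunny's API expects when purging the pull zone's cache. The deploy Action described below handles this for me, but the same purge can be triggered manually; a sketch with placeholder values, where the AccessKey is the account-level API key rather than the storage password:

```bash
# purge everything cached in the pull zone
curl --fail --request POST \
  --url "https://api.bunny.net/pullzone/12345/purgeCache" \
  --header "AccessKey: ${BUNNY_API_KEY}"
```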
GitHub Action

I found the bunnycdn-storage-deploy↗ Action, which makes it easy to upload content to a Bunny storage zone and also purge the cache of the pull zone at the same time. For that to work, I had to add a few new action secrets↗ to my GitHub repo:
| Name | Sample Value | Description |
|------|--------------|-------------|
| BUNNY_API_KEY | 34b05c3b-[...]-1b0f8cd6f0af | Used for purging the cache. Find it under Bunny's Account Settings↗ |
| BUNNY_STORAGE_ENDPOINT | ny.storage.bunnycdn.com | Hostname from the storage zone's FTP & API Access page |
| BUNNY_STORAGE_NAME | my-storage-zone | Name of the storage zone, which is also the username |
| BUNNY_STORAGE_PASSWORD | 7cb197e5-[...]-ad35820c0de8 | Get it from the storage zone's FTP & API Access page |
| BUNNY_ZONE_ID | 12345 | The pull zone ID you copied earlier |

Then I just updated my deployment workflow↗ to swap the Bunny action in place of the Neocities one (and adjust the 404 page for bunny↗):
# torchlight! {&#34;lineNumbers&#34;:true} name: Deploy to Production # [tl! collapse:start] # only run on changes to main on: schedule: - cron: 0 13 * * * workflow_dispatch: push: branches: - main concurrency: # prevent concurrent deploys doing strange things group: deploy-prod cancel-in-progress: true # Default to bash defaults: run: shell: bash # [tl! collapse:end] jobs: deploy: name: Build and deploy Hugo site runs-on: ubuntu-latest steps: - name: Hugo setup # [tl! collapse:19] uses: peaceiris/[email protected] with: hugo-version: &#39;0.121.1&#39; extended: true - name: Checkout uses: actions/checkout@v4 with: submodules: recursive - name: Connect to Tailscale uses: tailscale/github-action@v2 with: oauth-client-id: \${{ secrets.TS_API_CLIENT_ID }} oauth-secret: \${{ secrets.TS_API_CLIENT_SECRET }} tags: \${{ secrets.TS_TAG }} - name: Configure SSH known hosts run: | mkdir -p ~/.ssh echo &#34;\${{ secrets.SSH_KNOWN_HOSTS }}&#34; &gt; ~/.ssh/known_hosts chmod 644 ~/.ssh/known_hosts # - name: Build with Hugo run: HUGO_REMOTE_FONT_PATH=\${{ secrets.REMOTE_FONT_PATH }} hugo --minify - name: Insert 404 page # [tl! **:4] run: | # [tl! ++:1,1 --:2,1] mkdir -p public/bunnycdn_errors cp public/404/index.html public/not_found.html cp public/404/index.html public/bunnycdn_errors/404.html - name: Insert 404 page # [tl! ++:-1,1 reindex(-1)] run: | cp public/404/index.html public/not_found.html - name: Highlight with Torchlight run: | npm i @torchlight-api/torchlight-cli TORCHLIGHT_TOKEN=\${{ secrets.TORCHLIGHT_TOKEN }} npx torchlight - name: Deploy HTML to Neocities # [tl! **:5 --:5] uses: bcomnes/deploy-to-neocities@v1 with: api_token: \${{ secrets.NEOCITIES_API_TOKEN }} cleanup: true dist_dir: public - name: Deploy to Bunny # [tl! **:19 ++:19 reindex(-6)] uses: ayeressian/[email protected] with: # copy from the &#39;public&#39; folder source: public # to the root of the storage zone destination: / # details for accessing the storage zone storageZoneName: &#34;\${{ secrets.BUNNY_STORAGE_NAME }}&#34; storagePassword: &#34;\${{ secrets.BUNNY_STORAGE_PASSWORD }}&#34; storageEndpoint: &#34;\${{ secrets.BUNNY_STORAGE_ENDPOINT }}&#34; # details for accessing the pull zone accessKey: &#34;\${{ secrets.BUNNY_API_KEY }}&#34; pullZoneId: &#34;\${{ secrets.BUNNY_ZONE_ID }}&#34; # upload files/folders upload: &#34;true&#34; # remove all remote files first (clean) remove: &#34;true&#34; # purge the pull zone cache purgePullZone: &#34;true&#34; - name: Deploy GMI to Agate run: | rsync -avz --delete --exclude=&#39;*.html&#39; --exclude=&#39;*.css&#39; --exclude=&#39;*.js&#39; -e ssh public/ deploy@\${{ secrets.GMI_HOST }}:\${{ secrets.GMI_CONTENT_PATH }} After committing and pushing this change, I checked my repo on GitHub to confirm that the workflow completed successfully:
And I also checked the Bunny storage zone to confirm that the site's contents had been copied there:
DNS

With the site content in place, all that remained was to switch over the DNS record. I needed to use a CNAME to point runtimeterror.dev to the new pull zone, and that meant I first had to delete the existing A record pointing to Neocities' IP address. I waited about thirty seconds to make sure that change really took hold before creating the new CNAME to link runtimeterror.dev to the new pull zone, my-pull-zone.b-cdn.net.
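A couple of dig queries make it easy to watch the cutover happen (depending on how Bunny flattens the CNAME at the apex, the first lookup may show the b-cdn.net target or nothing at all):

```bash
dig +short runtimeterror.dev CNAME
dig +short runtimeterror.dev A  # should resolve to Bunny edge IPs once the change propagates
```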
SSL

The final piece was to go back to the pull zone's Hostnames configuration, add runtimeterror.dev as a custom hostname, and allow Bunny to automatically generate and install a cert for this hostname.
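Once the certificate is issued, a quick openssl check confirms what's actually being served (just a verification sketch):

```bash
# show the issuer and validity window of the cert presented for the custom hostname
echo | openssl s_client -connect runtimeterror.dev:443 -servername runtimeterror.dev 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```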
All done!

That's it - everything I did to get my site shifted over to being hosted directly by Bunny. It seems quite a bit snappier to me during my initial testing, and I'm hoping that things will be a bit more stable going forward. (Even if I do run into any issues with this setup, I'm pretty confident that Bunny's ultra-responsive support team↗ would be able to help me fix it.)
I'm really impressed with Bunny, and honestly can't recommend their platform highly enough. If you'd like to check it out, maybe use this referral link↗?
(Description: After a few weeks of weird configuration glitches, I decided to migrate my static site from Neocities to Bunny CDN. Here's how I did it. URL: https://runtimeterror.dev/further-down-the-bunny-hole/)

The Slash Page Scoop
(tags: hugo, meta | https://runtimeterror.dev/the-slash-page-scoop/)

Inspired by Robb Knight↗'s recent slash pages↗ site, I spent some time over the past week or two drafting some slash pages of my own.
Slash pages are common pages you can add to your website, usually with a standard, root-level slug like /now, /about, or /uses. They tend to describe the individual behind the site and are distinguishing characteristics of the IndieWeb.
On a blog that is otherwise organized in a fairly chronological manner, slash pages provide a way to share information out-of-band. I think they're great for more static content (like an about page that says who I am) as well as for content that may be regularly updated (like a changelog).
The pages that I've implemented (so far) include:
- /about tells a bit about me and my background
- /changelog is just starting to record some of the visual/functional changes I make here
- /colophon describes the technology and services used in producing/hosting this site
- /homelab isn't a canonical slash page but it provides a lot of details about my homelab setup
- /save shamelessly hosts referral links for things I love and think you'll love too
- /uses shares the stuff I use on a regular basis

And, of course, these are collected in one place at /slashes.
Feel free to stop here if you just want to check out the slash pages, or keep on reading for some nerd stuff about how I implemented them on my Hugo site.
Implementation

All of my typical blog posts get created within the site's Hugo directory under content/posts/, like this one at content/posts/the-slash-page-scoop/index.md. They get indexed, automatically added to the list of posts on the home page, and show up in the RSS feed. I don't want my slash pages to get that treatment, so I made them directly inside the content directory:
content
├── categories
├── posts
├── search
├── 404.md
├── _index.md
├── about.md [tl! ~~]
├── changelog.md [tl! ~~]
├── colophon.md [tl! ~~]
├── homelab.md [tl! ~~]
├── save.md [tl! ~~]
├── simplex.md
└── uses.md [tl! ~~]

Easy enough, but I didn't want to have to worry about manually updating a list of slash pages, so I used Hugo's Taxonomies↗ feature for that. I simply tagged each page with a new slashes category by adding it to the post's front matter:
# torchlight! {"lineNumbers":true}
---
title: "/changelog"
date: "2024-05-26"
lastmod: "2024-05-30"
description: "Maybe I should keep a log of all my site-related tinkering?"
categories: slashes # [tl! ~~]
---

Category Names
I really wanted to name the category /slashes, but that seems to trip up Hugo a bit when it comes to creating an archive of category posts. So I settled for slashes and came up with some workarounds to make it present the way I wanted.
Hugo will automatically generate an archive page for a given taxonomy term (so a post tagged with the category slashes would be listed at $BASE_URL/categories/slashes/), but I like to have a bit of control over how those archive pages are actually presented. So I create a new file at content/categories/slashes/_index.md and drop in this front matter:
# torchlight! {"lineNumbers":true}
---
title: /slashes
url: /slashes
aliases:
  - /categories/slashes
description: >
  My collection of slash pages.
---

The slashes in the file path tells Hugo which taxonomy term this page belongs to, so it can match the appropriately-categorized posts.
Just like with normal posts, the title field defines the title (duh) of the post; this way I can label the archive page as /slashes instead of just slashes.
The url field lets me override where the page will be served, and I added /categories/slashes as an alias so that anyone who hits that canonical URL will be automatically redirected.
Setting a description lets me choose what introductory text will be displayed at the top of the index page, as well as when it's shown at the next higher level archive (like /categories/).
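Since the url and aliases fields are doing the heavy lifting here, a quick local build is an easy way to confirm that the old /categories/slashes/ URL gets a redirect stub pointing at /slashes/. A sketch; the exact markup of Hugo's alias page may vary a bit:

```bash
hugo --quiet
# the alias stub should contain a meta-refresh pointing at the new URL
grep -o 'url=[^"]*' public/categories/slashes/index.html
```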
Of course, I'd like to include a link to slashpages.net↗ to provide a bit more info about what these pages are, and I can't add hyperlinks to the description text. What I can do is edit the template which is used for rendering the archive page. In my case, that's at layouts/partials/archive.html, and it starts out like this:
# torchlight! {&#34;lineNumbers&#34;:true} {{ $pages := .Pages }} {{ if .IsHome }} {{ $pages = where site.RegularPages &#34;Type&#34; &#34;in&#34; site.Params.mainSections }} {{ end }} &lt;header class=&#34;content__header&#34;&gt; {{ if .IsHome }} &lt;h1&gt;{{ site.Params.indexTitle | markdownify }}&lt;/h1&gt; {{ else }} &lt;h1&gt;{{ .Title | markdownify }}{{ if eq .Kind &#34;term&#34; }}&amp;nbsp;&lt;a href=&#34;{{ .Permalink }}feed.xml&#34; aria-label=&#34;Category RSS&#34;&gt;&lt;i class=&#34;fa-solid fa-square-rss&#34;&gt;&lt;/i&gt;&lt;/a&gt;&amp;nbsp;&lt;/h1&gt; &lt;!-- [tl! ~~] --&gt; {{ with .Description }}&lt;i&gt;{{ . }}&lt;/i&gt;&lt;hr&gt;{{ else }}&lt;br&gt;{{ end }} {{ end }}{{ end }} {{ .Content }} &lt;/header&gt; Line 9 is where I had already modified the template to conditionally add an RSS link for category archive pages. I'm going to tweak the setup a bit to conditionally render designated text when the page .Title matches /slashes:
# torchlight! {&#34;lineNumbers&#34;:true} {{ $pages := .Pages }} {{ if .IsHome }} {{ $pages = where site.RegularPages &#34;Type&#34; &#34;in&#34; site.Params.mainSections }} {{ end }} &lt;header class=&#34;content__header&#34;&gt; {{ if .IsHome }} &lt;h1&gt;{{ site.Params.indexTitle | markdownify }}&lt;/h1&gt; {{ else }} {{ if eq .Title &#34;/slashes&#34; }} &lt;!-- [tl! **:3 ++:3 ] --&gt; &lt;h1&gt;{{ .Title | markdownify }}&lt;/h1&gt; &lt;i&gt;My collection of &lt;a title=&#34;what&#39;s a slashpage?&#34; href=&#34;https://slashpages.net&#34;&gt;slash pages↗&lt;/a&gt;.&lt;/i&gt;&lt;hr&gt; {{ else }} &lt;h1&gt;{{ .Title | markdownify }}{{ if eq .Kind &#34;term&#34; }}&amp;nbsp;&lt;a href=&#34;{{ .Permalink }}feed.xml&#34; aria-label=&#34;Category RSS&#34;&gt;&lt;i class=&#34;fa-solid fa-square-rss&#34;&gt;&lt;/i&gt;&lt;/a&gt;&amp;nbsp;&lt;/h1&gt; {{ with .Description }}&lt;i&gt;{{ . }}&lt;/i&gt;&lt;hr&gt;{{ else }}&lt;br&gt;{{ end }} {{ end }} &lt;!-- [tl! ** ++ ] --&gt; {{ end }}{{ end }} {{ .Content }} &lt;/header&gt; So instead of rendering the description I defined in the front matter the archive page will show:
My collection of slash pages↗.
While I'm at it, I'd like for the slash pages themselves to be listed in alphabetical order rather than sorted by date (like everything else on the site). The remainder of my layouts/partials/archive.html already handles a few different ways of displaying lists of content:
# torchlight! {&#34;lineNumbers&#34;:true} {{- if and (eq .Kind &#34;taxonomy&#34;) (eq .Title &#34;Tags&#34;) }} &lt;!-- [tl! reindex(15)] --&gt; {{/* /tags/ */}} &lt;div class=&#34;tagsArchive&#34;&gt; {{- range $key, $value := .Site.Taxonomies }} {{- $slicedTags := ($value.ByCount) }} {{- range $slicedTags }} {{- if eq $key &#34;tags&#34;}} &lt;div&gt;&lt;a href=&#39;/{{ $key }}/{{ (replace .Name &#34;#&#34; &#34;%23&#34;) | urlize }}/&#39; title=&#34;{{ .Name }}&#34;&gt;{{ .Name }}&lt;/a&gt;&lt;sup&gt;{{ .Count }}&lt;/sup&gt;&lt;/div&gt; {{- end }} {{- end }} {{- end }} &lt;/div&gt; {{- else if eq .Kind &#34;taxonomy&#34; }} {{/* /categories/ */}} {{- $sorted := sort $pages &#34;Title&#34; }} {{- range $sorted }} {{- $postDate := .Date.Format &#34;2006-01-02&#34; }} {{- $updateDate := .Lastmod.Format &#34;2006-01-02&#34; }} &lt;article class=&#34;post&#34;&gt; &lt;header class=&#34;post__header&#34;&gt; &lt;h1&gt;&lt;a href=&#34;{{ .Permalink }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt;&lt;/h1&gt; &lt;p class=&#34;post__meta&#34;&gt; &lt;span class=&#34;date&#34;&gt;[&#34;{{ with $updateDate }}{{ . }}{{ else }}{{ $postDate }}{{ end }}&#34;]&lt;/span&gt; &lt;/p&gt; &lt;/header&gt; &lt;section class=&#34;post__summary&#34;&gt; {{ .Description }} &lt;/section&gt; &lt;hr&gt; &lt;/article&gt; {{ end }} {{- else }} {{/* regular posts archive */}} {{- range (.Paginate $pages).Pages }} {{- $postDate := .Date.Format &#34;2006-01-02&#34; }} {{- $updateDate := .Lastmod.Format &#34;2006-01-02&#34; }} &lt;article class=&#34;post&#34;&gt; &lt;header class=&#34;post__header&#34;&gt; &lt;h1&gt;&lt;a href=&#34;{{ .Permalink }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt;&lt;/h1&gt; &lt;p class=&#34;post__meta&#34;&gt; &lt;span class=&#34;date&#34;&gt;[&#34;{{- $postDate }}&#34;{{- if ne $postDate $updateDate }}, &#34;{{ $updateDate }}&#34;{{ end }}]&lt;/span&gt; &lt;/p&gt; &lt;/header&gt; &lt;section class=&#34;post__summary&#34;&gt; {{if .Description }}{{ .Description }}{{ else }}{{ .Summary }}{{ end }} &lt;/section&gt; &lt;hr&gt; &lt;/article&gt; {{- end }} {{- template &#34;_internal/pagination.html&#34; . }} {{- end }} The /tags/ archive uses a condensed display format which simply shows the tag name and the number of posts with that tag. Other taxonomy archives (like /categories) are sorted by title, displayed with a brief description, and the date that a post in the categories was published or updated. Archives of posts are sorted by date (most recent first) and include the post description (or summary if it doesn't have one), and both the publish and updated dates. I'll just tweak the second condition there to check for either a taxonomy archive or a page with the title /slashes:
# torchlight! {&#34;lineNumbers&#34;:true} {{- if and (eq .Kind &#34;taxonomy&#34;) (eq .Title &#34;Tags&#34;) }} &lt;!-- [tl! collapse:start reindex(20)] --&gt; {{/* /tags/ */}} &lt;div class=&#34;tagsArchive&#34;&gt; {{- range $key, $value := .Site.Taxonomies }} {{- $slicedTags := ($value.ByCount) }} {{- range $slicedTags }} {{- if eq $key &#34;tags&#34;}} &lt;div&gt;&lt;a href=&#39;/{{ $key }}/{{ (replace .Name &#34;#&#34; &#34;%23&#34;) | urlize }}/&#39; title=&#34;{{ .Name }}&#34;&gt;{{ .Name }}&lt;/a&gt;&lt;sup&gt;{{ .Count }}&lt;/sup&gt;&lt;/div&gt; {{- end }} {{- end }} {{- end }} &lt;!-- [tl! collapse:end] --&gt; &lt;/div&gt; {{- else if eq .Kind &#34;taxonomy&#34; }} &lt;!-- [tl! **:3 --] --&gt; {{- else if or (eq .Kind &#34;taxonomy&#34;) (eq .Title &#34;/slashes&#34;) }} &lt;!-- [tl! ++ reindex(-1)] --&gt; {{/* /categories/ */}} &lt;!-- [tl! --] --&gt; {{/* /categories/ or /slashes/ */}} &lt;!-- [tl! ++ reindex(-1)] --&gt; {{- $sorted := sort $pages &#34;Title&#34; }} {{- range $sorted }} {{- $postDate := .Date.Format &#34;2006-01-02&#34; }} {{- $updateDate := .Lastmod.Format &#34;2006-01-02&#34; }} &lt;article class=&#34;post&#34;&gt; &lt;header class=&#34;post__header&#34;&gt; &lt;h1&gt;&lt;a href=&#34;{{ .Permalink }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt;&lt;/h1&gt; &lt;p class=&#34;post__meta&#34;&gt; &lt;span class=&#34;date&#34;&gt;[&#34;{{ with $updateDate }}{{ . }}{{ else }}{{ $postDate }}{{ end }}&#34;]&lt;/span&gt; &lt;/p&gt; &lt;/header&gt; &lt;section class=&#34;post__summary&#34;&gt; {{ .Description }} &lt;/section&gt; &lt;hr&gt; &lt;/article&gt; {{ end }} {{- else }} &lt;!-- [tl! collapse:start] --&gt; {{/* regular posts archive */}} {{- range (.Paginate $pages).Pages }} {{- $postDate := .Date.Format &#34;2006-01-02&#34; }} {{- $updateDate := .Lastmod.Format &#34;2006-01-02&#34; }} &lt;article class=&#34;post&#34;&gt; &lt;header class=&#34;post__header&#34;&gt; &lt;h1&gt;&lt;a href=&#34;{{ .Permalink }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt;&lt;/h1&gt; &lt;p class=&#34;post__meta&#34;&gt; &lt;span class=&#34;date&#34;&gt;[&#34;{{- $postDate }}&#34;{{- if ne $postDate $updateDate }}, &#34;{{ $updateDate }}&#34;{{ end }}]&lt;/span&gt; &lt;/p&gt; &lt;/header&gt; &lt;section class=&#34;post__summary&#34;&gt; {{if .Description }}{{ .Description }}{{ else }}{{ .Summary }}{{ end }} &lt;/section&gt; &lt;hr&gt; &lt;/article&gt; {{- end }} {{- template &#34;_internal/pagination.html&#34; . }} {{- end }} &lt;!-- [tl! collapse:end] --&gt; So that's got the /slashes page looking the way I want it to. The last tweak will be to the template I use for displaying related (ie, in the same category) posts in the sidebar. The magic for that happens in layouts/partials/aside.html:
# torchlight! {&#34;lineNumbers&#34;:true} {{ if .Params.description }}&lt;p&gt;{{ .Params.description }}&lt;/p&gt;&lt;hr&gt;{{ end }} &lt;!-- [tl! collapse:start] --&gt; {{ if and (gt .WordCount 400 ) (gt (len .TableOfContents) 180) }} &lt;p&gt; &lt;h3&gt;On this page&lt;/h3&gt; {{ .TableOfContents }} &lt;hr&gt; &lt;/p&gt; {{ end }} &lt;!-- [tl! collapse:end] --&gt; {{ if isset .Params &#34;categories&#34; }} &lt;!--[tl! **:start] --&gt; {{$related := where .Site.RegularPages &#34;.Params.categories&#34; &#34;eq&#34; .Params.categories }} {{- $relatedLimit := default 8 .Site.Params.numberOfRelatedPosts }} {{ if eq .Params.categories &#34;slashes&#34; }} &lt;!-- [tl! ++:10] --&gt; &lt;h3&gt;More /slashes&lt;/h3&gt; {{ $sortedPosts := sort $related &#34;Title&#34; }} &lt;ul&gt; {{- range $sortedPosts }} &lt;li&gt; &lt;a href=&#34;{{ .Permalink }}&#34; title=&#34;{{ .Title }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt; &lt;/li&gt; {{ end }} &lt;/ul&gt; {{ else }} &lt;h3&gt;More {{ .Params.categories }}&lt;/h3&gt; &lt;ul&gt; {{- range first $relatedLimit $related }} &lt;li&gt; &lt;a href=&#34;{{ .Permalink }}&#34; title=&#34;{{ .Title }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt; &lt;/li&gt; {{ end }} {{ if gt (len $related) $relatedLimit }} &lt;li&gt; &lt;a href=&#34;/categories/{{ lower .Params.categories }}/&#34;&gt;&lt;i&gt;See all {{ .Params.categories }}&lt;/i&gt;&lt;/a&gt; &lt;/li&gt; {{ end }} &lt;/ul&gt; {{ end }} &lt;!-- [tl! ++ **:end] --&gt; &lt;hr&gt; {{ end }} {{- $posts := where .Site.RegularPages &#34;Type&#34; &#34;in&#34; .Site.Params.mainSections }} &lt;!-- [tl! collase:12] --&gt; {{- $featured := default 8 .Site.Params.numberOfFeaturedPosts }} {{- $featuredPosts := first $featured (where $posts &#34;Params.featured&#34; true)}} {{- with $featuredPosts }} &lt;h3&gt;Featured Posts&lt;/h3&gt; &lt;ul&gt; {{- range . }} &lt;li&gt; &lt;a href=&#34;{{ .Permalink }}&#34; title=&#34;{{ .Title }}&#34;&gt;{{ .Title | markdownify }}&lt;/a&gt; &lt;/li&gt; {{- end }} &lt;/ul&gt; {{- end }} So now if you visit any of my slash pages (like, say, /colophon) you'll see the alphabetized list of other slash pages in the side bar.
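A local build also makes it easy to spot-check that the new sidebar section actually shows up on a slash page (a sketch; /colophon is just one example):

```bash
hugo --quiet
grep -o 'More /slashes' public/colophon/index.html | head -1
```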
Closing I'll probably keep tweaking these slash pages in the coming days, but for now I'm really glad to finally have them posted. I've only been thinking about doing this for the past six months.
`,description:"I've added new slash pages to the site to share some background info on who I am, what I use, and how this site works.",url:"https://runtimeterror.dev/the-slash-page-scoop/"},"https://runtimeterror.dev/prettify-hugo-rss-feed-xslt/":{title:"Prettify Hugo RSS Feeds with XSLT",tags:["hugo","meta"],content:`I put in some work several months back making sure my site's RSS would work well in a feed reader. This meant making a lot of modifications to the default Hugo RSS template↗. I made it load the full article text rather than just the summary, present correctly-formatted code blocks with no loss of important whitespace, include inline images, and even pass online validation checks:
But while the feed looks great when rendered by a reader, the browser presentation left something to be desired...
It feels like there should be a friendlier way to present a feed &quot;landing page&quot; to help users new to RSS figure out what they need to do in order to follow a blog - and there absolutely is. In much the same way that you can prettify plain HTML with the inclusion of a CSS stylesheet, you can also style boring XML using eXtensible Stylesheet Language Transformations (XSLT)↗.
This post will quickly cover how I used XSLT to style my blog's RSS feed and made it look like this:
Starting Point The RSS Templates↗ page from the Hugo documentation site provides some basic information about how to generate (and customize) an RSS feed for a Hugo-powered site. The basic steps are to enable the RSS output in hugo.toml↗, include a link to the generated feed inside the &lt;head&gt; element of the site template (I added it to layouts/partials/head.html↗), and (optionally) include a customized RSS template to influence how the output gets rendered.
Here's the content of my layouts/_default/rss.xml, before adding the XSLT styling:
# torchlight! {&#34;lineNumbers&#34;: true} {{- $pctx := . -}} {{- if .IsHome -}}{{ $pctx = .Site }}{{- end -}} {{- $pages := slice -}} {{- if or $.IsHome $.IsSection -}} {{- $pages = (where $pctx.RegularPages &#34;Type&#34; &#34;in&#34; site.Params.mainSections) -}} {{- else -}} {{- $pages = (where $pctx.Pages &#34;Type&#34; &#34;in&#34; site.Params.mainSections) -}} {{- end -}} {{- $limit := .Site.Config.Services.RSS.Limit -}} {{- if ge $limit 1 -}} {{- $pages = $pages | first $limit -}} {{- end -}} {{- printf &#34;&lt;?xml version=\\&#34;1.0\\&#34; encoding=\\&#34;utf-8\\&#34; standalone=\\&#34;yes\\&#34;?&gt;&#34; | safeHTML }} &lt;rss version=&#34;2.0&#34; xmlns:atom=&#34;http://www.w3.org/2005/Atom&#34; xmlns:content=&#34;http://purl.org/rss/1.0/modules/content/&#34; xmlns:dc=&#34;http://purl.org/dc/elements/1.1/&#34;&gt; &lt;channel&gt; &lt;title&gt;{{ if eq .Title .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}&lt;/title&gt; &lt;link&gt;{{ .Permalink }}&lt;/link&gt; &lt;description&gt;Recent content {{ if ne .Title .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }}&lt;/description&gt; &lt;generator&gt;Hugo -- gohugo.io&lt;/generator&gt;{{ with .Site.LanguageCode }} &lt;language&gt;{{.}}&lt;/language&gt;{{end}}{{ with .Site.Copyright }} &lt;copyright&gt;{{.}}&lt;/copyright&gt;{{end}}{{ if not .Date.IsZero }} &lt;lastBuildDate&gt;{{ .Date.Format &#34;Mon, 02 Jan 2006 15:04:05 -0700&#34; | safeHTML }}&lt;/lastBuildDate&gt;{{ end }} {{- with .OutputFormats.Get &#34;RSS&#34; -}} {{ printf &#34;&lt;atom:link href=%q rel=\\&#34;self\\&#34; type=%q /&gt;&#34; .Permalink .MediaType | safeHTML }} {{- end -}} &lt;image&gt; &lt;url&gt;{{ .Site.Params.fallBackOgImage | absURL }}&lt;/url&gt; &lt;title&gt;{{ if eq .Title .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}&lt;/title&gt; &lt;link&gt;{{ .Permalink }}&lt;/link&gt; &lt;/image&gt; {{ range $pages }} &lt;item&gt; &lt;title&gt;{{ .Title | plainify }}&lt;/title&gt; &lt;link&gt;{{ .Permalink }}&lt;/link&gt; &lt;pubDate&gt;{{ .Date.Format &#34;Mon, 02 Jan 2006 15:04:05 -0700&#34; | safeHTML }}&lt;/pubDate&gt; {{ with .Site.Params.Author.name }}&lt;dc:creator&gt;{{.}}&lt;/dc:creator&gt;{{ end }} {{ with .Params.series }}&lt;category&gt;{{ . | lower }}&lt;/category&gt;{{ end }} {{ range (.GetTerms &#34;tags&#34;) }} &lt;category&gt;{{ .LinkTitle }}&lt;/category&gt;{{ end }} &lt;guid&gt;{{ .Permalink }}&lt;/guid&gt; {{- $content := replaceRE &#34;a href=\\&#34;(#.*?)\\&#34;&#34; (printf &#34;%s%s%s&#34; &#34;a href=\\&#34;&#34; .Permalink &#34;$1\\&#34;&#34;) .Content -}} {{- $content = replaceRE &#34;img src=\\&#34;(.*?)\\&#34;&#34; (printf &#34;%s%s%s&#34; &#34;img src=\\&#34;&#34; .Permalink &#34;$1\\&#34;&#34;) $content -}} {{- $content = replaceRE &#34;&lt;svg.*&lt;/svg&gt;&#34; &#34;&#34; $content -}} {{- $content = replaceRE \`-moz-tab-size:\\d;-o-tab-size:\\d;tab-size:\\d;?\` &#34;&#34; $content -}} &lt;description&gt;{{ $content | html }}&lt;/description&gt; &lt;/item&gt; {{ end }} &lt;/channel&gt; &lt;/rss&gt; There's a lot going on here, but much of it is conditional logic so that Hugo can use the same template to render feeds for individual tags, categories, or the entire site. It then loops through the range of pages of that type to generate the data for each post. 
It also uses the Hugo strings.ReplaceRE function↗ to replace relative image and anchor links with the full paths so those references will work correctly in readers, and to clean up some potentially-problematic HTML markup that was causing validation failures.
All I really need to do to get this XML ready to be styled is just link in a style sheet, and I'll do that by inserting a &lt;?xml-stylesheet /&gt; element directly below the top-level &lt;?xml /&gt; one:
# torchlight! {&#34;lineNumbers&#34;: true, &#34;lineNumbersStart&#34;: 10} {{- if ge $limit 1 -}} {{- $pages = $pages | first $limit -}} {{- end -}} {{- printf &#34;&lt;?xml version=\\&#34;1.0\\&#34; encoding=\\&#34;utf-8\\&#34; standalone=\\&#34;yes\\&#34;?&gt;&#34; | safeHTML }} {{ printf &#34;&lt;?xml-stylesheet type=\\&#34;text/xsl\\&#34; href=\\&#34;/xml/feed.xsl\\&#34; media=\\&#34;all\\&#34;?&gt;&#34; | safeHTML }} &lt;!-- [tl! ++ ] --&gt; &lt;rss version=&#34;2.0&#34; xmlns:atom=&#34;http://www.w3.org/2005/Atom&#34; xmlns:content=&#34;http://purl.org/rss/1.0/modules/content/&#34; xmlns:dc=&#34;http://purl.org/dc/elements/1.1/&#34;&gt; I'll put the stylesheet in static/xml/feed.xsl so Hugo will make it available at the appropriate path.
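A quick way to make sure that processing instruction actually lands in the rendered feed is to grep the build output. This is just a sketch: it assumes the site-wide feed gets written to public/feed.xml (adjust the path if your RSS output is configured differently).

```shell
# build the site and confirm the stylesheet reference made it into the feed
hugo --quiet
head -c 300 public/feed.xml           # the xml-stylesheet line should appear right after the XML declaration
grep -c 'xml/feed.xsl' public/feed.xml
```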
Creating the Style While trying to figure out how I could dress up my RSS XML, I came across the default XSL file↗ provided with the Nikola SSG↗, and I thought it looked like a pretty good starting point.
Here's Nikola's default XSL:
# torchlight! {&#34;lineNumbers&#34;: true} &lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt; &lt;xsl:stylesheet xmlns:xsl=&#34;http://www.w3.org/1999/XSL/Transform&#34; xmlns:atom=&#34;http://www.w3.org/2005/Atom&#34; xmlns:dc=&#34;http://purl.org/dc/elements/1.1/&#34; version=&#34;1.0&#34;&gt; &lt;xsl:output method=&#34;xml&#34;/&gt; &lt;xsl:template match=&#34;/&#34;&gt; &lt;html xmlns=&#34;http://www.w3.org/1999/xhtml&#34; lang=&#34;en&#34;&gt; &lt;head&gt; &lt;meta charset=&#34;UTF-8&#34;/&gt; &lt;meta name=&#34;viewport&#34; content=&#34;width=device-width&#34;/&gt; &lt;title&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/title&gt; &lt;style&gt;&lt;![CDATA[html{margin:0;padding:0;}body{color:hsl(180,1%,31%);font-family:Helvetica,Arial,sans-serif;font-size:17px;line-height:1.4;margin:5%;max-width:35rem;padding:0;}input{min-width:20rem;margin-left:.2rem;padding-left:.2rem;padding-right:.2rem;}ol{list-style-type:disc;padding-left:1rem;}h2{font-size:22px;font-weight:inherit;}]]&gt;&lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/h1&gt; &lt;p&gt;This is an &lt;abbr title=&#34;Really Simple Syndication&#34;&gt;RSS&lt;/abbr&gt; feed. To subscribe to it, copy its address and paste it when your feed reader asks for it. It will be updated periodically in your reader. New to feeds? &lt;a href=&#34;https://duckduckgo.com/?q=how+to+get+started+with+rss+feeds&#34; title=&#34;Search on the web to learn more&#34;&gt;Learn more&lt;/a&gt;.&lt;/p&gt; &lt;p&gt; &lt;label for=&#34;address&#34;&gt;RSS address:&lt;/label&gt; &lt;input&gt;&lt;xsl:attribute name=&#34;type&#34;&gt;url&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;id&#34;&gt;address&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;spellcheck&#34;&gt;false&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;value&#34;&gt;&lt;xsl:value-of select=&#34;rss/channel/atom:link[@rel=&#39;self&#39;]/@href&#34;/&gt;&lt;/xsl:attribute&gt;&lt;/input&gt; &lt;/p&gt; &lt;p&gt;Preview of the feed’s current headlines:&lt;/p&gt; &lt;ol&gt; &lt;xsl:for-each select=&#34;rss/channel/item&#34;&gt; &lt;li&gt;&lt;h2&gt;&lt;a&gt;&lt;xsl:attribute name=&#34;href&#34;&gt;&lt;xsl:value-of select=&#34;link&#34;/&gt;&lt;/xsl:attribute&gt;&lt;xsl:value-of select=&#34;title&#34;/&gt;&lt;/a&gt;&lt;/h2&gt;&lt;/li&gt; &lt;/xsl:for-each&gt; &lt;/ol&gt; &lt;/body&gt; &lt;/html&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt; If I just plug that in at static/xml/feed.xml, I do successfully get a styled (though very white) RSS page:
I'd like this to inherit the same styling as the rest of the site so that it looks like it belongs. I can go a long way toward that by bringing in the CSS stylesheets that are used on every page, and I'll also tweak the existing &lt;style /&gt; element to remove some conflicts with my preferred styling:
# torchlight! {&#34;lineNumbers&#34;: true, &#34;lineNumbersStart&#34;: 10} &lt;title&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/title&gt; &lt;style&gt;&lt;![CDATA[html{margin:0;padding:0;}body{color:hsl(180,1%,31%);font-family:Helvetica,Arial,sans-serif;font-size:17px;line-height:1.4;margin:5%;max-width:35rem;padding:0;}input{min-width:20rem;margin-left:.2rem;padding-left:.2rem;padding-right:.2rem;}ol{list-style-type:disc;padding-left:1rem;}h2{font-size:22px;font-weight:inherit;}]]&gt;&lt;/style&gt;&lt;title&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/title&gt; &lt;!-- [tl! remove ] --&gt; &lt;style&gt;&lt;![CDATA[html{margin:0;padding:0;}body{color:font-size:1.1rem;line-height:1.4;margin:5%;max-width:35rem;padding:0;}input{min-width:20rem;margin-left:.2rem;padding-left:.2rem;padding-right:.2rem;}h2{font-size:22px;font-weight:inherit;}h2:before{content:&#34;&#34; !important;}]]&gt;&lt;/style&gt; &lt;!-- [tl! ++:3 reindex(-1) ] --&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/palettes/runtimeterror.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/risotto.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/custom.css&#34; /&gt; &lt;/head&gt; While I'm at it, I'll also go on and add in some favicons:
# torchlight! {&#34;lineNumbers&#34;: true, &#34;lineNumbersStart&#34;: 10} &lt;title&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/title&gt; &lt;style&gt;&lt;![CDATA[html{margin:0;padding:0;}body{color:font-size:1.1rem;line-height:1.4;margin:5%;max-width:35rem;padding:0;}input{min-width:20rem;margin-left:.2rem;padding-left:.2rem;padding-right:.2rem;}h2{font-size:22px;font-weight:inherit;}h2:before{content:&#34;&#34; !important;}]]&gt;&lt;/style&gt; &lt;link rel=&#34;apple-touch-icon&#34; sizes=&#34;180x180&#34; href=&#34;/icons/apple-touch-icon.png&#34; /&gt; &lt;!-- [tl! ++:5] --&gt; &lt;link rel=&#34;icon&#34; type=&#34;image/png&#34; sizes=&#34;32x32&#34; href=&#34;/icons/favicon-32x32.png&#34; /&gt; &lt;link rel=&#34;icon&#34; type=&#34;image/png&#34; sizes=&#34;16x16&#34; href=&#34;/icons/favicon-16x16.png&#34; /&gt; &lt;link rel=&#34;manifest&#34; href=&#34;/icons/site.webmanifest&#34; /&gt; &lt;link rel=&#34;mask-icon&#34; href=&#34;/icons/safari-pinned-tab.svg&#34; color=&#34;#5bbad5&#34; /&gt; &lt;link rel=&#34;shortcut icon&#34; href=&#34;/icons/favicon.ico&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/palettes/runtimeterror.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/risotto.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/custom.css&#34; /&gt; That's getting there:
Including those CSS styles means that the rendered page now uses my color palette and the font I worked so hard to integrate. I'm just going to make a few more tweaks to change some of the formatting, put the New to feeds? bit on its own line, and point to Mojeek↗ instead of DDG (why?↗).
Here's my final (for now) static/xml/feed.xsl file:
# torchlight! {&#34;lineNumbers&#34;: true} &lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt; &lt;!-- adapted from https://github.com/getnikola/nikola/blob/master/nikola/data/themes/base/assets/xml/rss.xsl --&gt; &lt;xsl:stylesheet xmlns:xsl=&#34;http://www.w3.org/1999/XSL/Transform&#34; xmlns:atom=&#34;http://www.w3.org/2005/Atom&#34; xmlns:dc=&#34;http://purl.org/dc/elements/1.1/&#34; version=&#34;1.0&#34;&gt; &lt;xsl:output method=&#34;xml&#34;/&gt; &lt;xsl:template match=&#34;/&#34;&gt; &lt;html xmlns=&#34;http://www.w3.org/1999/xhtml&#34; lang=&#34;en&#34;&gt; &lt;head&gt; &lt;meta charset=&#34;UTF-8&#34;/&gt; &lt;meta name=&#34;viewport&#34; content=&#34;width=device-width&#34;/&gt; &lt;title&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/title&gt; &lt;style&gt;&lt;![CDATA[html{margin:0;padding:0;}body{color:font-size:1.1rem;line-height:1.4;margin:5%;max-width:35rem;padding:0;}input{min-width:20rem;margin-left:.2rem;padding-left:.2rem;padding-right:.2rem;}h2{font-size:22px;font-weight:inherit;}h3:before{content:&#34;&#34; !important;}]]&gt;&lt;/style&gt; &lt;link rel=&#34;apple-touch-icon&#34; sizes=&#34;180x180&#34; href=&#34;/icons/apple-touch-icon.png&#34; /&gt; &lt;link rel=&#34;icon&#34; type=&#34;image/png&#34; sizes=&#34;32x32&#34; href=&#34;/icons/favicon-32x32.png&#34; /&gt; &lt;link rel=&#34;icon&#34; type=&#34;image/png&#34; sizes=&#34;16x16&#34; href=&#34;/icons/favicon-16x16.png&#34; /&gt; &lt;link rel=&#34;manifest&#34; href=&#34;/icons/site.webmanifest&#34; /&gt; &lt;link rel=&#34;mask-icon&#34; href=&#34;/icons/safari-pinned-tab.svg&#34; color=&#34;#5bbad5&#34; /&gt; &lt;link rel=&#34;shortcut icon&#34; href=&#34;/icons/favicon.ico&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/palettes/runtimeterror.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/risotto.css&#34; /&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;/css/custom.css&#34; /&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;&lt;xsl:value-of select=&#34;rss/channel/title&#34;/&gt; (RSS)&lt;/h1&gt; &lt;p&gt;This is an &lt;abbr title=&#34;Really Simple Syndication&#34;&gt;RSS&lt;/abbr&gt; feed. To subscribe to it, copy its address and paste it when your feed reader asks for it. It will be updated periodically in your reader.&lt;/p&gt; &lt;p&gt;New to feeds? &lt;a href=&#34;https://www.mojeek.com/search?q=how+to+get+started+with+rss+feeds&#34; title=&#34;Search on the web to learn more&#34;&gt;Learn more&lt;/a&gt;.&lt;/p&gt; &lt;p&gt; &lt;label for=&#34;address&#34;&gt;RSS address:&lt;/label&gt; &lt;input&gt;&lt;xsl:attribute name=&#34;type&#34;&gt;url&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;id&#34;&gt;address&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;spellcheck&#34;&gt;false&lt;/xsl:attribute&gt;&lt;xsl:attribute name=&#34;value&#34;&gt;&lt;xsl:value-of select=&#34;rss/channel/atom:link[@rel=&#39;self&#39;]/@href&#34;/&gt;&lt;/xsl:attribute&gt;&lt;/input&gt; &lt;/p&gt; &lt;p&gt;&lt;h2&gt;Recent posts:&lt;/h2&gt;&lt;/p&gt; &lt;ul&gt; &lt;xsl:for-each select=&#34;rss/channel/item&#34;&gt; &lt;li&gt;&lt;h3&gt;&lt;a&gt;&lt;xsl:attribute name=&#34;href&#34;&gt;&lt;xsl:value-of select=&#34;link&#34;/&gt;&lt;/xsl:attribute&gt;&lt;xsl:value-of select=&#34;title&#34;/&gt;&lt;/a&gt;&lt;/h3&gt;&lt;/li&gt; &lt;/xsl:for-each&gt; &lt;/ul&gt; &lt;/body&gt; &lt;/html&gt; &lt;/xsl:template&gt; &lt;/xsl:stylesheet&gt; I'm pretty pleased with that result!
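Since a malformed stylesheet just silently falls back to raw XML in the browser, it's worth a quick local check before deploying. A rough sketch, assuming xmllint and xsltproc (from libxml2/libxslt) are installed and that the built feed lives at public/feed.xml:

```shell
# confirm the stylesheet itself is well-formed XML
xmllint --noout static/xml/feed.xsl && echo "feed.xsl looks good"

# optionally apply it to the built feed to preview the HTML a browser would render
xsltproc static/xml/feed.xsl public/feed.xml > /tmp/feed-preview.html
```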
`,description:"Making my Hugo-generated RSS XML look as good to human visitors as it does to feed readers.",url:"https://runtimeterror.dev/prettify-hugo-rss-feed-xslt/"},"https://runtimeterror.dev/using-custom-font-hugo/":{title:"Using a Custom Font with Hugo",tags:["bunny","cicd","cloudflare","hugo","meta","tailscale"],content:`Last week, I came across and immediately fell in love with a delightfully-retro monospace font called Berkeley Mono↗. I promptly purchased a &quot;personal developer&quot; license and set to work applying the font in my IDE and terminal↗. I didn't want to stop there, though; the license also permits me to use the font on my personal site, and Berkeley Mono will fit in beautifully with the whole runtimeterror aesthetic.
Well, you're looking at the slick new font here, and I'm about to tell you how I added the font both to the site itself and to the dynamically-generated OpenGraph share images setup. It wasn't terribly hard to implement, but the Hugo documentation is a bit light on how to do it (and I'm kind of inept at this whole web development thing).
Web Font This site's styling is based on the risotto theme for Hugo↗. Risotto uses the CSS variable --font-monospace in themes/risotto/static/css/typography.css to define the font face, and then that variable is inserted wherever the font may need to be set:
/* torchlight! {&#34;lineNumbers&#34;:true} */ /* Fonts */ :root { --font-monospace: &#34;Fira Mono&#34;, monospace; /* [tl! **] */ } body { font-family: var(--font-monospace); /* [tl! **] */ font-size: 16px; line-height: 1.5rem; } This makes it easy to override the theme's font by inserting my preferred font in static/custom.css:
/* font overrides */ :root { --font-monospace: 'Berkeley Mono', 'Fira Mono', monospace; /* [tl! **] */ } And that would be the end of things if I could expect that everyone who visited my site already had the Berkeley Mono font installed; if they don't, though, the site will fall back to either the Fira Mono font or whatever generic monospace font is on the system. So maybe I'll add a few other monospace fonts just for good measure:
/* font overrides */ :root { --font-monospace: &#39;Berkeley Mono&#39;, &#39;IBM Plex Mono&#39;, &#39;Cascadia Mono&#39;, &#39;Roboto Mono&#39;, &#39;Source Code Pro&#39;, &#39;Fira Mono&#39;, &#39;Courier New&#39;, monospace; /* [tl! **] */ } That provides a few more options to fall back to if the preferred font isn't available. But let's see about making that font available.
Hosted Locally I can use a @font-face rule to tell the browser how to find the .woff2 file for my preferred web font, and I could just set the src: url parameter to point to a local path in my Hugo environment:
/* load preferred font */ @font-face { font-family: &#39;Berkeley Mono&#39;; font-style: normal; font-weight: 400; /* use the installed font with this name if it&#39;s there... */ src: local(&#39;Berkeley Mono&#39;), /* otherwise look at these paths */ url(&#39;/fonts/BerkeleyMono.woff2&#39;) format(&#39;woff2&#39;), } WOFF2 vs WOFF(1)
A previous version of this post also included the .woff file in addition to .woff2. A kind reader let me know that basically everything↗ supports .woff2, and since .woff2 offers much better compression than first-generation .woff there really isn't any reason to offer a font in .woff format in this modern age. I can just offer .woff2 on its own.
I've updated this post, my CSS, and the contents of my CDN storage accordingly.
And that would work just fine... but it would require storing those web font files in the (public) GitHub repo↗ which powers my site, and I'd rather not store any paid font files there.
So instead, I opted to try using a Content Delivery Network (CDN)↗ to host the font files. This would allow for some degree of access control, help me learn more about a web technology I hadn't played with much, and make use of a cool cdn.* subdomain in the process.
Double the CDN, double the fun
Of course, while writing this post I gave in to my impulsive nature and migrated the site from Cloudflare to Bunny.net↗. Rather than scrap the content I'd already written, I'll go ahead and describe how I set this up first on Cloudflare R2↗ and later on Bunny Storage↗.
Cloudflare R2 Getting started with R2 was really easy; I just created a new R2 bucket↗ called runtimeterror and connected it to the custom domain↗ cdn.runtimeterror.dev. I put the two web font files in a folder titled fonts and uploaded them to the bucket so that they can be accessed under https://cdn.runtimeterror.dev/fonts/.
I could then employ a Cross-Origin Resource Sharing (CORS)↗ policy to ensure the fonts hosted on my fledgling CDN can only be loaded on my site. I configured the policy to also allow access from my localhost Hugo build environment as well as a preview Neocities environment I use for testing such major changes:
[ { &#34;AllowedOrigins&#34;: [ &#34;http://localhost:1313&#34;, &#34;https://secret--runtimeterror--preview.neocities.org&#34;, &#34;https://runtimeterror.dev&#34; ], &#34;AllowedMethods&#34;: [ &#34;GET&#34; ] } ] Then I just needed to update the @font-face rule accordingly:
/* load preferred font */ @font-face { font-family: &#39;Berkeley Mono&#39;; font-style: normal; font-weight: 400; font-display: fallback; /* [tl! ++] */ src: local(&#39;Berkeley Mono&#39;), url(&#39;/fonts/BerkeleyMono.woff2&#39;) format(&#39;woff2&#39;), /* [tl! --] */ url(&#39;https://cdn.runtimeterror.dev/fonts/BerkeleyMono.woff2&#39;) format(&#39;woff2&#39;), /* [tl! ++] */ } I added in the font-display: fallback; descriptor to address the fact that the site will now be loading a remote font. Rather than blocking and not rendering text until the preferred font is loaded, it will show text in one of the available fallback fonts. If the preferred font loads quickly enough, it will be swapped in; otherwise, it will just show up on the next page load. I figured this was a good middle-ground between wanting the site to load quickly while also looking the way I want it to.
To test my work, I ran hugo server to build and serve the site locally on http://localhost:1313... and promptly encountered a cascade of CORS-related errors. I kept tweaking the policy and trying to learn more about what I'm doing (reminder: I'm bad at this), but just couldn't figure out what was preventing the font from being loaded.
I eventually discovered that sometimes you need to clear Cloudflare's cache so that new policy changes will take immediate effect. Once I purged everything↗, the errors went away and the font loaded successfully.
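For anyone else fighting CORS gremlins, a direct curl against the CDN can be a quicker feedback loop than reloading the site. A sketch (the header names are standard CORS, but exactly what comes back depends on how R2/the CDN is configured):

```shell
# request the font with an allowed Origin and look for the CORS headers in the response
curl -sI -H "Origin: https://runtimeterror.dev" \
  https://cdn.runtimeterror.dev/fonts/BerkeleyMono.woff2 | grep -i '^access-control'
```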
Bunny Storage After migrating my domain to Bunny.net, the CDN font setup was pretty similar - but also different enough that it's worth mentioning. I started by creating a new Storage Zone named runtimeterror-storage, and selecting an appropriate-seeming set of replication regions. I then uploaded the same fonts/ folder as before.
To be able to access the files in Bunny Storage, I connected a new Pull Zone (called runtimeterror-pull) and linked that Pull Zone with the cdn.runtimeterror.dev hostname. I also made sure to enable the option to automatically generate a certificate for this host.
Rather than needing me to understand CORS and craft a viable policy file, Bunny provides a clean UI with easy-to-understand options for configuring the pull zone security. I enabled the options to block root path access, block POST requests, block direct file access, and also added the same trusted referrers as before:
I made sure to use the same paths as I had on Cloudflare so I didn't need to update the Hugo config at all even after changing CDNs. That same CSS from before still works:
/* load preferred font */ @font-face { font-family: &#39;Berkeley Mono&#39;; font-style: normal; font-weight: 400; font-display: fallback; src: local(&#39;Berkeley Mono&#39;), url(&#39;https://cdn.runtimeterror.dev/fonts/BerkeleyMono.woff2&#39;) format(&#39;woff2&#39;), } I again tested locally with hugo server and confirmed that the font loaded from Bunny CDN without any CORS or other errors.
So that's the web font for the web site sorted (twice); now let's tackle the font in the OpenGraph share images.
Image Filter Text My setup for generating the share images leverages the Hugo images.Text↗ function to overlay text onto a background image, and it needs a TrueType font in order to work. I was previously just storing the required font directly in my GitHub repo so that it would be available during the site build, but I definitely don't want to do that with a paid TrueType font file. So I needed to come up with some way to provide the TTF file to the GitHub runner without making it publicly available.
I recently figured out how I could use a GitHub Action to easily connect the runner to my Tailscale environment, and I figured I could re-use that idea here - only instead of pushing something to my tailnet, I'll be pulling something out.
Tailscale Setup So I SSH'd to the cloud server I'm already using for hosting my Gemini capsule, created a folder to hold the font file (/opt/fonts/), and copied the TTF file into there. And then I used Tailscale Serve to publish that folder internally to my tailnet:
sudo tailscale serve --bg --set-path /fonts /opt/fonts/ # [tl! .cmd] # [tl! .nocopy:4] Available within your tailnet: https://node.tailnet-name.ts.net/fonts/ |-- path /opt/fonts The --bg flag will run the share in the background and automatically start it with the system (like a daemon-mode setup).
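Before involving GitHub at all, the share can be spot-checked from any other machine on the tailnet (node.tailnet-name.ts.net is the placeholder hostname from above):

```shell
# from another tailnet device: make sure the TTF is reachable over the tailnet-only HTTPS endpoint
curl -sI https://node.tailnet-name.ts.net/fonts/BerkeleyMono.ttf | head -n 1
# expect something like: HTTP/2 200
```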
When I set up Tailscale for the Gemini capsule workflow, I configured the Tailscale ACL so that the GitHub runner (tag:gh-bld) could talk to my server (tag:gh-srv) over SSH:
&#34;acls&#34;: [ { // github runner can talk to the deployment target &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;tag:gh-bld&#34;], &#34;ports&#34;: [ &#34;tag:gh-srv:22&#34; ], } ], I needed to update that ACL to allow communication over HTTPS as well:
&#34;acls&#34;: [ { // github runner can talk to the deployment target &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;tag:gh-bld&#34;], &#34;ports&#34;: [ &#34;tag:gh-srv:22&#34;, &#34;tag:gh-srv:443&#34; // [tl! ++] ], } ], I then logged into the Tailscale admin panel to follow the same steps as last time to generate a unique OAuth client↗ tied to the tag:gh-bld tag. I stored the ID, secret, and tags as repository secrets named TS_API_CLIENT_ID, TS_API_CLIENT_SECRET, and TS_TAG.
I also created a REMOTE_FONT_PATH secret which will be used to tell Hugo where to find the required TTF file (https://node.tailnet-name.ts.net/fonts/BerkeleyMono.ttf).
Hugo Setup Here's the image-related code that I was previously using in layouts/partials/opengraph to create the OpenGraph images:
{{ $img := resources.Get &#34;og_base.png&#34; }} {{ $font := resources.Get &#34;/FiraMono-Regular.ttf&#34; }} {{ $text := &#34;&#34; }} {{- if .IsHome }} {{ $text = .Site.Params.Description }} {{- end }} {{- if .IsPage }} {{ $text = .Page.Title }} {{ end }} {{- with .Params.thumbnail }} {{ $thumbnail := $.Resources.Get . }} {{ with $thumbnail }} {{ $img = $img.Filter (images.Overlay (.Process &#34;fit 300x250&#34;) 875 38 )}} {{ end }} {{ end }} {{ $img = $img.Filter (images.Text $text (dict &#34;color&#34; &#34;#d8d8d8&#34; &#34;size&#34; 64 &#34;linespacing&#34; 2 &#34;x&#34; 40 &#34;y&#34; 300 &#34;font&#34; $font ))}} {{ $img = resources.Copy (path.Join $.Page.RelPermalink &#34;og.png&#34;) $img }} All I need to do is get it to pull the font resource from a web address rather than the local file system, and I'll do that by loading an environment variable instead of hardcoding the path here.
Hugo Environment Variable Access
By default, Hugo's os.Getenv function only has access to environment variables which start with HUGO_. You can adjust the security configuration↗ to alter this restriction if needed, but I figured I could work just fine within the provided constraints.
{{ $img := resources.Get &#34;og_base.png&#34; }} {{ $font := resources.Get &#34;/FiraMono-Regular.ttf&#34; }} &lt;!-- [tl! -- ] --&gt; {{ $text := &#34;&#34; }} {{ $font := &#34;&#34; }} &lt;!-- [tl! ++:10 **:10 ]&gt; {{ $path := os.Getenv &#34;HUGO_REMOTE_FONT_PATH&#34; }} {{ with resources.GetRemote $path }} {{ with .Err }} {{ errorf &#34;%s&#34; . }} {{ else }} {{ $font = . }} {{ end }} {{ else }} {{ errorf &#34;Unable to get resource %q&#34; $path }} {{ end }} {{- if .IsHome }} {{ $text = .Site.Params.Description }} {{- end }} &lt;!-- [tl! collapse:start ] --&gt; {{- if .IsPage }} {{ $text = .Page.Title }} {{ end }} {{- with .Params.thumbnail }} {{ $thumbnail := $.Resources.Get . }} {{ with $thumbnail }} {{ $img = $img.Filter (images.Overlay (.Process &#34;fit 300x250&#34;) 875 38 )}} {{ end }} {{ end }} {{ $img = $img.Filter (images.Text $text (dict &#34;color&#34; &#34;#d8d8d8&#34; &#34;size&#34; 64 &#34;linespacing&#34; 2 &#34;x&#34; 40 &#34;y&#34; 300 &#34;font&#34; $font ))}} {{ $img = resources.Copy (path.Join $.Page.RelPermalink &#34;og.png&#34;) $img }} &lt;!-- [tl! collapse:end ] --&gt; I can test that this works by running a build locally from a system with access to my tailnet. I'm not going to start a web server with this build; I'll just review the contents of the public/ folder once it's complete to see if the OpenGraph images got rendered correctly.
HUGO_REMOTE_FONT_PATH=https://node.tailnet-name.ts.net/fonts/BerkeleyMono.ttf hugo Neat, it worked!
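To double-check without serving the site, the rendered share images can be counted and inspected straight out of public/. A sketch; the 1200 x 630 dimensions match the og:image meta tags:

```shell
# count the generated OpenGraph images and spot-check one of them
find public -name 'og.png' | wc -l
file "$(find public -name 'og.png' | head -n 1)"   # should report PNG image data, 1200 x 630
```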
GitHub Action All that's left is to update the GitHub Actions workflow I use for building and deploying my site to Neocities to automate things:
# torchlight! {&#34;lineNumbers&#34;: true} # .github/workflows/deploy-to-neocities.yml name: Deploy to Neocities # [tl! collapse:start] on: schedule: - cron: 0 13 * * * push: branches: - main concurrency: group: deploy-to-neocities cancel-in-progress: true defaults: run: shell: bash # [tl! collapse:end] jobs: deploy: name: Build and deploy Hugo site runs-on: ubuntu-latest steps: - name: Hugo setup uses: peaceiris/[email protected] with: hugo-version: &#39;0.121.1&#39; extended: true - name: Checkout uses: actions/checkout@v4 with: submodules: recursive - name: Connect to Tailscale # [tl! ++:10 **:10] uses: tailscale/github-action@v2 with: oauth-client-id: \${{ secrets.TS_API_CLIENT_ID }} oauth-secret: \${{ secrets.TS_API_CLIENT_SECRET }} tags: \${{ secrets.TS_TAG }} - name: Build with Hugo run: hugo --minify # [tl! -- **] run: HUGO_REMOTE_FONT_PATH=\${{ secrets.REMOTE_FONT_PATH }} hugo --minify # [tl! ++ ** reindex(-1) ] - name: Insert 404 page run: | cp public/404/index.html public/not_found.html - name: Highlight with Torchlight run: | npm i @torchlight-api/torchlight-cli npx torchlight - name: Deploy to Neocities uses: bcomnes/deploy-to-neocities@v1 with: api_token: \${{ secrets.NEOCITIES_API_TOKEN }} cleanup: true dist_dir: public This uses the Tailscale GitHub Action↗ to connect the runner to my tailnet using the credentials I created earlier, and passes the REMOTE_FONT_PATH secret as an environment variable to the Hugo command line. Hugo will then be able to retrieve and use the TTF font during the build process.
Conclusion Configuring and using a custom font in my Hugo-generated site wasn't hard to do, but I had to figure some things out on my own to get started in the right direction. I learned a lot about how fonts are managed in CSS along the way, and I love the way the new font looks on this site!
This little project also gave me an excuse to play with first Cloudflare R2 and then Bunny Storage, and I came away seriously impressed by Bunny (and have since moved more of my domains to bunny.net). Expect me to write more about cool Bunny stuff in the future.
`,description:"Installing a custom font on a Hugo site, and taking steps to protect the paid font files from unauthorized distribution. Plus a brief exploration of a pair of storage CDNs, and using Tailscale in a GitHub Actions workflow.",url:"https://runtimeterror.dev/using-custom-font-hugo/"},"https://runtimeterror.dev/blocking-ai-crawlers/":{title:"Blocking AI Crawlers",tags:["cloud","cloudflare","hugo","meta","selfhosting"],content:`I've seen some recent posts from folks like Cory Dransfeldt↗ and Ethan Marcotte↗ about how (and why) to prevent your personal website from being slurped up by the crawlers that AI companies use to actively enshittify the internet↗. I figured it was past time for me to hop on board with this, so here we are.
My initial approach was to use Hugo's robots.txt templating↗ to generate a robots.txt file based on a list of bad bots I got from ai.robots.txt on GitHub↗.
I dumped that list into my config/params.toml file, above any of the nested elements (since toml is kind of picky about that...).
bad_robots = [ "AdsBot-Google", "Amazonbot", "anthropic-ai", "Applebot-Extended", "AwarioRssBot", "AwarioSmartBot", "Bytespider", "CCBot", "ChatGPT", "ChatGPT-User", "Claude-Web", "ClaudeBot", "cohere-ai", "DataForSeoBot", "Diffbot", "FacebookBot", "Google-Extended", "GPTBot", "ImagesiftBot", "magpie-crawler", "omgili", "Omgilibot", "peer39_crawler", "PerplexityBot", "YouBot" ] I then created a new template in layouts/robots.txt:
```
Sitemap: {{ .Site.BaseURL }}/sitemap.xml
# hello robots [^_^]
# let's be friends <3
User-agent: *
Disallow:
# except for these bots which are not friends:
{{ range .Site.Params.bad_robots }}
User-agent: {{ . }}
{{- end }}
Disallow: /
```
And enabled the template processing for this in my config/hugo.toml file:
enableRobotsTXT = true Now Hugo will generate the following robots.txt file for me:
Sitemap: https://runtimeterror.dev/sitemap.xml # hello robots [^_^] # let&#39;s be friends &lt;3 User-agent: * Disallow: # except for these bots which are not friends: User-agent: AdsBot-Google User-agent: Amazonbot User-agent: anthropic-ai User-agent: Applebot-Extended User-agent: AwarioRssBot User-agent: AwarioSmartBot User-agent: Bytespider User-agent: CCBot User-agent: ChatGPT User-agent: ChatGPT-User User-agent: Claude-Web User-agent: ClaudeBot User-agent: cohere-ai User-agent: DataForSeoBot User-agent: Diffbot User-agent: FacebookBot User-agent: Google-Extended User-agent: GPTBot User-agent: ImagesiftBot User-agent: magpie-crawler User-agent: omgili User-agent: Omgilibot User-agent: peer39_crawler User-agent: PerplexityBot User-agent: YouBot Disallow: / Cool!
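That file comes straight out of a local hugo build, so it's easy to double-check whenever the bot list changes. A sketch, assuming the default public/ output directory:

```shell
# build and make sure every bad bot made it into the generated file
hugo --quiet
grep -c '^User-agent:' public/robots.txt   # should be the list length plus one for the wildcard
tail -n 3 public/robots.txt                # ends with the blocked group's "Disallow: /"
```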
I also dropped the following into static/ai.txt for good measure↗:
```
# Spawning AI
# Prevent datasets from using the following file types
User-Agent: *
Disallow: /
Disallow: *
```
That's all well and good, but these files carry all the weight and authority of a "No Soliciting" sign. Do I really trust these bots to honor it?
I'm hosting this site on Neocities, and Neocities unfortunately (though perhaps wisely) doesn't give me control of the web server there. But the site is fronted by Cloudflare, and that does give me a lot of options for blocking stuff I don't want.
So I added a WAF Custom Rule↗ to block those unwanted bots. (I could have used their User Agent Blocking↗ to accomplish the same, but you can only set 10 of those on the free tier. I can put all the user agents together in a single WAF Custom Rule.)
Here's the expression I'm using:
(http.user_agent contains &#34;AdsBot-Google&#34;) or (http.user_agent contains &#34;Amazonbot&#34;) or (http.user_agent contains &#34;anthropic-ai&#34;) or (http.user_agent contains &#34;Applebot-Extended&#34;) or (http.user_agent contains &#34;AwarioRssBot&#34;) or (http.user_agent contains &#34;AwarioSmartBot&#34;) or (http.user_agent contains &#34;Bytespider&#34;) or (http.user_agent contains &#34;CCBot&#34;) or (http.user_agent contains &#34;ChatGPT-User&#34;) or (http.user_agent contains &#34;ClaudeBot&#34;) or (http.user_agent contains &#34;Claude-Web&#34;) or (http.user_agent contains &#34;cohere-ai&#34;) or (http.user_agent contains &#34;DataForSeoBot&#34;) or (http.user_agent contains &#34;FacebookBot&#34;) or (http.user_agent contains &#34;Google-Extended&#34;) or (http.user_agent contains &#34;GoogleOther&#34;) or (http.user_agent contains &#34;GPTBot&#34;) or (http.user_agent contains &#34;ImagesiftBot&#34;) or (http.user_agent contains &#34;magpie-crawler&#34;) or (http.user_agent contains &#34;Meltwater&#34;) or (http.user_agent contains &#34;omgili&#34;) or (http.user_agent contains &#34;omgilibot&#34;) or (http.user_agent contains &#34;peer39_crawler&#34;) or (http.user_agent contains &#34;peer39_crawler/1.0&#34;) or (http.user_agent contains &#34;PerplexityBot&#34;) or (http.user_agent contains &#34;Seekr&#34;) or (http.user_agent contains &#34;YouBot&#34;) And checking on that rule ~24 hours later, I can see that it's doing some good:
See ya, AI bots!
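If you want to confirm the WAF rule is actually biting (rather than just watching the dashboard), spoofing a blocked user agent with curl works as a rough check. The exact status code depends on which action the rule uses; 403 is what Cloudflare's Block action returns:

```shell
# a normal request should come back 200...
curl -s -o /dev/null -w '%{http_code}\n' https://runtimeterror.dev/
# ...while a blocked crawler user agent should hit the WAF rule instead
curl -s -o /dev/null -w '%{http_code}\n' -A "GPTBot" https://runtimeterror.dev/
```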
`,description:"Using Hugo to politely ask AI bots to not steal my content - and then configuring Cloudflare's WAF to actively block them, just to be sure.",url:"https://runtimeterror.dev/blocking-ai-crawlers/"},"https://runtimeterror.dev/gemini-capsule-gempost-github-actions/":{title:"Self-Hosted Gemini Capsule with gempost and GitHub Actions",tags:["caddy","cicd","docker","selfhosting","tailscale"],content:`I've recently been exploring some indieweb/smolweb technologies, and one of the most interesting things I've come across is Project Gemini↗:
Gemini is a new internet technology supporting an electronic library of interconnected text documents. That's not a new idea, but it's not old fashioned either. It's timeless, and deserves tools which treat it as a first class concept, not a vestigial corner case. Gemini isn't about innovation or disruption, it's about providing some respite for those who feel the internet has been disrupted enough already. We're not out to change the world or destroy other technologies. We are out to build a lightweight online space where documents are just documents, in the interests of every reader's privacy, attention and bandwidth.
I thought it was an interesting idea, so after a bit of experimentation with various hosted options I created a self-hosted Gemini capsule (Gemini for &quot;web site&quot;) to host a lightweight text-focused Gemlog (&quot;weblog&quot;)↗. After further tinkering, I arranged to serve the capsule both on the Gemini network as well as the traditional HTTP-based web, and I set up a GitHub Actions workflow to handle posting updates. This post will describe how I did that.
Gemini Server: Agate There are a number of different Gemini server applications↗ to choose from. I decided to use Agate↗, not just because it was at the top of the Awesome Gemini list but also because it seems to be widely recommended, regularly updated, and easy to use. Plus it will automatically generate certs for me, which is nice since Gemini requires valid certificates for all connections.
I wasn't able to find a pre-built Docker image for Agate (at least not one which seemed to be actively maintained), but I found that I could easily make my own by just installing Agate on top of the standard Rust image. So I came up with this Dockerfile:
```dockerfile
# torchlight! {"lineNumbers": true}
FROM rust:latest
RUN cargo install agate
WORKDIR /var/agate
ENTRYPOINT ["agate"]
```
This very simply uses the Rust package manager↗ to install agate, change to an appropriate working directory, and start the agate executable.
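Building the image and asking the binary for its version is a cheap smoke test before writing the compose file (a sketch; it assumes agate responds to the usual --version flag):

```shell
# build the image and make sure the installed binary actually runs
docker build -t agate .
docker run --rm agate --version   # assumption: agate supports --version like most clap-based CLIs
```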
And then I can create a basic docker-compose.yaml for it:
# torchlight! {&#34;lineNumbers&#34;: true} services: agate: restart: always build: . container_name: agate volumes: - ./content:/var/agate/content - ./certs:/var/agate/certs ports: - &#34;1965:1965&#34; command: &gt; --content content --certs certs --addr 0.0.0.0:1965 --hostname capsule.jbowdre.lol --lang en-US This mounts local directories ./content and ./certs into the /var/agate/ working directory, passes arguments specifying those directories as well as the hostname to the agate executable, and exposes the service on port 1965 (commemorating the first manned Gemini missions↗ - I love the space nerd influences here!).
Now I'll throw some quick Gemtext↗ into a file in the ./content directory:
```shell
mkdir -p content # [tl! .cmd]
cat << EOF > content/hello-gemini.gmi # [tl! .cmd]
# This is a test of the Gemini broadcast system.
* This is only a test.
=> gemini://geminiprotocol.net/history/ Gemini History
EOF
```
After configuring the capsule.jbowdre.lol DNS record and opening port 1965 on my server's firewall, I can use docker compose up -d to spawn the server and begin serving my mostly-empty capsule at gemini://capsule.jbowdre.lol/hello-gemini.gmi. I can then check it out in a popular Gemini client called Lagrange↗:
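Lagrange is the nicer experience, but since Gemini is really just TLS plus a one-line request, the server can also be poked from a plain terminal. A sketch using openssl (certificate verification is skipped here, which is fine for a quick look at a self-signed cert):

```shell
# send a raw Gemini request and read back the response header and gemtext body
printf 'gemini://capsule.jbowdre.lol/hello-gemini.gmi\r\n' | \
  openssl s_client -quiet -connect capsule.jbowdre.lol:1965 2>/dev/null
# expect a "20 text/gemini" status line followed by the page content
```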
Hooray, I have an outpost in Geminispace! Let's dress it up a little bit now.
Gemini Content: gempost A &quot;hello world&quot; post will only hold someone's interest for so long. I wanted to start using my capsule for lightweight blogging tasks, but I didn't want to have to manually keep track of individual posts or manage consistent formatting. After a bit of questing, I found gempost↗, a static site generator for gemlogs (Gemini blogs). It makes it easy to template out index pages and insert headers/footers, and supports tracking post metadata in a yaml sidecar for each post (handy since gemtext doesn't support front matter or other inline meta properties).
gempost is installed as a Rust package, so I could have installed Rust and then used cargo install gempost to install gempost. But I've been really getting into using project-specific development shells powered by Nix flakes↗ and direnv↗ to automatically load/unload packages as I traverse the directory tree. So I put together this flake↗ to activate Agate, Rust, and gempost when I change into the directory where I want to build my capsule content:
# torchlight! {&#34;lineNumbers&#34;: true} { description = &#34;gemsite build environment&#34;; inputs = { nixpkgs.url = &#34;github:nixos/nixpkgs/nixos-unstable&#34;; }; outputs = { self, nixpkgs }: let pkgs = import nixpkgs { system = &#34;x86_64-linux&#34;; }; gempost = pkgs.rustPlatform.buildRustPackage rec { pname = &#34;gempost&#34;; version = &#34;v0.3.0&#34;; src = pkgs.fetchFromGitHub { owner = &#34;justlark&#34;; repo = pname; rev = version; hash = &#34;sha256-T6CP1blKXik4AzkgNJakrJyZDYoCIU5yaxjDvK3p01U=&#34;; }; cargoHash = &#34;sha256-jG/G/gmaCGpqXeRQX4IqV/Gv6i/yYUpoTC0f7fMs4pQ=&#34;; }; in { devShells.x86_64-linux.default = pkgs.mkShell { packages = with pkgs; [ agate gempost ]; }; }; } Then I just need to tell direnv to load the flake, by dropping this into .envrc:
# torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env direnv use flake . Now when I cd into the directory where I'm managing the content for my capsule, the appropriate environment gets loaded automagically:
which gempost # [tl! .cmd] cd capsule/ # [tl! .cmd] direnv: loading ~/projects/capsule/.envrc # [tl! .nocopy:3] direnv: using flake . direnv: nix-direnv: using cached dev shell # [tl! collapse:1] direnv: export +AR +AS +CC +CONFIG_SHELL +CXX +HOST_PATH +IN_NIX_SHELL +LD +NIX_BINTOOLS +NIX_BINTOOLS_WRAPPER_TARGET_HOST_x86_64_unknown_linux_gnu +NIX_BUILD_CORES +NIX_CC +NIX_CC_WRAPPER_TARGET_HOST_x86_64_unknown_linux_gnu +NIX_CFLAGS_COMPILE +NIX_ENFORCE_NO_NATIVE +NIX_HARDENING_ENABLE +NIX_LDFLAGS +NIX_STORE +NM +OBJCOPY +OBJDUMP +RANLIB +READELF +SIZE +SOURCE_DATE_EPOCH +STRINGS +STRIP +__structuredAttrs +buildInputs +buildPhase +builder +cmakeFlags +configureFlags +depsBuildBuild +depsBuildBuildPropagated +depsBuildTarget +depsBuildTargetPropagated +depsHostHost +depsHostHostPropagated +depsTargetTarget +depsTargetTargetPropagated +doCheck +doInstallCheck +dontAddDisableDepTrack +mesonFlags +name +nativeBuildInputs +out +outputs +patches +phases +preferLocalBuild +propagatedBuildInputs +propagatedNativeBuildInputs +shell +shellHook +stdenv +strictDeps +system ~PATH ~XDG_DATA_DIRS which gempost # [tl! .cmd] /nix/store/1jbfsb0gm1pr1ijc8304jqak7iamjybi-gempost-v0.3.0/bin/gempost # [tl! .nodopy] Neat!
I'm not going to go into too much detail on how I configured/use gempost; the readme↗ does a great job of explaining what's needed to get started, and the examples directory↗ shows some of the flexibility and configurable options. Perhaps the biggest change I made is using gemlog/ as the gemlog directory (instead of posts/). You can check out my gempost config file↗ and/or gempost templates↗ for the full details.
In any case, after gempost is configured and I've written some posts, I can build the capsule with gempost build. It spits out the &quot;rendered&quot; gemtext (basically generating index pages and the Atom feed from the posts and templates) in the public/ directory.
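A quick listing of the build output shows what will eventually need to land on the server (a sketch; the gemlog/ path reflects my directory rename):

```shell
# build and list the rendered gemtext plus the Atom feed
gempost build
find public -name '*.gmi' -o -name 'atom.xml' | sort
```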
If I copy the contents of that public/ directory into my Agate content/ directory, I'll be serving this new-and-improved Gemini capsule. But who wants to be manually running gempost build and copying files around to publish them? Not this guy. I'll set up a GitHub Actions workflow to take care of that for me.
But first, let's do a little bit more work to make this capsule available on the traditional HTTP-powered world wide web.
Web Proxy: kineto I looked at a few different options for making Gemini content available on the web. I thought about finding/writing a script to convert gemtext to HTML and just serving that directly. I saw a few alternative Gemini servers which seemed able to cross-host content. But the simplest solution I saw (and one that I've seen used on quite a few other sites) is a Gemini-to-HTTP proxy called kineto↗.
kineto is distributed as a golang package so that's pretty easy to build and install in a Docker image:
# torchlight! {&#34;lineNumbers&#34;:true} FROM golang:1.22 as build WORKDIR /build RUN git clone https://git.sr.ht/~sircmpwn/kineto WORKDIR /build/kineto RUN go mod download \\ &amp;&amp; CGO_ENABLED=0 GOOS=linux go build -o kineto FROM alpine:3.19 WORKDIR /app COPY --from=build /build/kineto/kineto /app/kineto ENTRYPOINT [&#34;/app/kineto&#34;] I created a slightly-tweaked style.css based on kineto's default styling↗ to tailor the appearance a bit more to my liking. (I've got some more work to do to make it look &quot;good&quot;, but I barely know how to spell CSS so I'll get to that later.)
/* torchlight! {&#34;lineNumbers&#34;:true} */ html { /* [tl! collapse:start] */ font-family: sans-serif; color: #080808; } body { max-width: 920px; margin: 0 auto; padding: 1rem 2rem; } blockquote { background-color: #eee; border-left: 3px solid #444; margin: 1rem -1rem 1rem calc(-1rem - 3px); padding: 1rem; } ul { margin-left: 0; padding: 0; } li { padding: 0; } li:not(:last-child) { margin-bottom: 0.5rem; } /* [tl! collapse:end] */ a { position: relative; text-decoration: dotted underline; /* [tl! ++:start] */ } a:hover { text-decoration: solid underline; /* [tl! ++:end] */ } /* [tl! collapse:start] */ a:before { content: &#39;⇒&#39;; color: #999; text-decoration: none; font-weight: bold; position: absolute; left: -1.25rem; } pre { background-color: #eee; margin: 0 -1rem; padding: 1rem; overflow-x: auto; } details:not([open]) summary, details:not([open]) summary a { color: gray; } details summary a:before { display: none; } dl dt { font-weight: bold; } dl dt:not(:first-child) { margin-top: 0.5rem; } /* [tl! collapse:end] */ @media(prefers-color-scheme:dark) { /* [tl! collapse:start] */ html { background-color: #111; color: #eee; } blockquote { background-color: #000; } pre { background-color: #222; } a { color: #0087BD; } /* [tl! collapse:end] */ a:visited { color: #333399; /* [tl! --] */ color: #006ebd; /* [tl! ++ reindex(-1)] */ } } /* [tl! collapse:start] */ label { display: block; font-weight: bold; margin-bottom: 0.5rem; } input { display: block; border: 1px solid #888; padding: .375rem; line-height: 1.25rem; transition: border-color .15s ease-in-out,box-shadow .15s ease-in-out; width: 100%; } input:focus { outline: 0; border-color: #80bdff; box-shadow: 0 0 0 0.2rem rgba(0,123,255,.25); } /* [tl! collapse:end] */ And then I crafted a docker-compose.yaml for my kineto instance:
# torchlight! {&#34;lineNumbers&#34;:true} services: kineto: restart: always build: . container_name: kineto ports: - &#34;8081:8080&#34; volumes: - ./style.css:/app/style.css command: -s style.css gemini://capsule.jbowdre.lol Port 8080 was already in use for another service on this system, so I had to expose this on 8081 instead. And I used the -s argument to kineto to load my customized CSS, and then pointed the proxy at the capsule's Gemini address.
I started this container with docker compose up -d... but the job wasn't quite done. This server is already using Caddy webserver↗ for serving some stuff, so I'll use that for fronting kineto as well. That way, the content can be served over the typical HTTPS port and Caddy can manage the certificate for me too.
So I added this to my /etc/caddy/Caddyfile:
capsule.jbowdre.lol { reverse_proxy localhost:8081 } And bounced the Caddy service.
Now my capsule is available at both gemini://capsule.jbowdre.lol and https://capsule.jbowdre.lol. Agate handles generating the Gemini content, and kineto is able to convert that to web-friendly HTML on the fly. It even handles embedded gemini:// links pointing to other capsules, allowing legacy web users to explore bits of Geminispace from the comfort of their old-school browsers.
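One curl against each hop is enough to confirm the whole chain is healthy (a sketch):

```shell
# kineto directly on its local port, then the public Caddy-fronted hostname
curl -sI http://localhost:8081/ | head -n 1
curl -sI https://capsule.jbowdre.lol/ | head -n 1
# both should return an HTTP 200 with HTML translated from the gemtext
```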
One thing that kineto doesn't translate, though, is the contents of the atom feed at public/gemlog/atom.xml. Whether it's served via Gemini or HTTPS, the contents are static. For instance:
&lt;entry&gt; &lt;id&gt;urn:uuid:a751b018-cda5-4c03-bd9d-16bdc1506050&lt;/id&gt; &lt;title&gt;Hello Gemini&lt;/title&gt; &lt;summary&gt;I decided to check out Geminispace. You&#39;re looking at my first exploration.&lt;/summary&gt; &lt;published&gt;2024-03-05T17:00:00-06:00&lt;/published&gt; &lt;updated&gt;2024-03-06T07:45:00-06:00&lt;/updated&gt; &lt;link rel=&#34;alternate&#34; href=&#34;gemini://capsule.jbowdre.lol/gemlog/2024-03-05-hello-gemini.gmi&#34;/&gt; &lt;!-- [tl! **] --&gt; &lt;/entry&gt; No worries, I can throw together a quick script that will generate a web-friendly feed:
# torchlight! {&#34;lineNumbers&#34;:true} #!/usr/bin/env bash # generate-web-feed.sh set -eu INFEED=&#34;public/gemlog/atom.xml&#34; OUTFEED=&#34;public/gemlog/atom-web.xml&#34; # read INFEED, replace gemini:// with https:// and write to OUTFEED sed &#39;s/gemini:\\/\\//https:\\/\\//g&#39; $INFEED &gt; $OUTFEED # fix self url sed -i &#39;s/atom\\.xml/atom-web\\.xml/g&#39; $OUTFEED After running that, I have a public/gemlog/atom-web.xml file that I can serve for web visitors, and it correctly points to the https:// endpoint so that feed readers won't get confused:
&lt;entry&gt; &lt;id&gt;urn:uuid:a751b018-cda5-4c03-bd9d-16bdc1506050&lt;/id&gt; &lt;title&gt;Hello Gemini&lt;/title&gt; &lt;summary&gt;I decided to check out Geminispace. You&#39;re looking at my first exploration.&lt;/summary&gt; &lt;published&gt;2024-03-05T17:00:00-06:00&lt;/published&gt; &lt;updated&gt;2024-03-06T07:45:00-06:00&lt;/updated&gt; &lt;link rel=&#34;alternate&#34; href=&#34;https://capsule.jbowdre.lol/gemlog/2024-03-05-hello-gemini.gmi&#34; /&gt; &lt;!-- [tl! ** ] --&gt; &lt;/entry&gt; Let's sort out an automated build process now...
Publish: GitHub Actions To avoid having to manually publish my capsule by running gempost build and copying the result into the server's content/ directory, I'm going to automate that process with a GitHub Actions workflow. It will execute the build process in a GitHub runner and then shift the result to my server. I'll make use of Tailscale↗ to easily establish an rsync-ssh connection between the runner and the server without having to open up any ports publicly.
I'll start by creating a new GitHub repo for managing my capsule content; let's call it github.com/jbowdre/capsule↗. I'll clone it locally so I can work in it on my laptop and then push changes upstream when I'm ready. So I'll copy/move over all the gempost files I've worked on so far, along with a copy of the Docker configurations for Agate and kineto just so I've got those visible in one place:
```
.
├── .certificates
├── .direnv
├── agate
├── gemlog
├── kineto
├── public
├── static
├── templates
├── .envrc
├── .gitignore
├── flake.lock
├── flake.nix
├── gempost.yaml
└── generate-web-feed.sh
```
I don't need to sync .direnv/ (which holds the local direnv state), public/ (which holds the gempost output - I'll be building that in GitHub shortly), or .certificates/ (which was automatically created when testing agate locally with agate --content public --hostname localhost), so I'll add those to .gitignore:
/.certificates/ /.direnv/ /public/ Then I can create .github/workflows/deploy-gemini.yml to manage the deployment, and I start that by defining when it should be triggered:
only on pushes to the main branch, and only when one of the gempost-related files/directories is included in that push. It will also set some other sane defaults for the workflow:
# torchlight! {&#34;lineNumbers&#34;:true} name: Deploy Gemini Capsule # only run on changes to main on: workflow_dispatch: push: branches: - main paths: - &#39;gemlog/**&#39; - &#39;static/**&#39; - &#39;templates/**&#39; - &#39;gempost.yaml&#39; concurrency: # prevent concurrent deploys doing strange things group: deploy-gemini-capsule cancel-in-progress: true # Default to bash defaults: run: shell: bash I'll then define my first couple of jobs to:
use the cargo-install Action↗ to install gempost, checkout the current copy of my capsule repo, run the gempost build process, and then call my generate-web-feed.sh script. # torchlight! {&#34;lineNumbers&#34;:true} jobs: # [tl! reindex(24)] deploy: name: Deploy Gemini Capsule runs-on: ubuntu-latest steps: - name: Install gempost uses: baptiste0928/[email protected] with: crate: gempost - name: Checkout uses: actions/checkout@v4 - name: Gempost build run: gempost build - name: Generate web feed run: ./generate-web-feed.sh Next, I'll use the Tailscale Action↗ to let the runner connect to my Tailnet. It will use an OAuth client↗ to authenticate the new node, and will apply an ACL tag↗ which grants access only to my web server.
# torchlight! {&#34;lineNumbers&#34;:true} - name: Connect to Tailscale # [tl! reindex(39)] uses: tailscale/github-action@v2 with: oauth-client-id: \${{ secrets.TS_API_CLIENT_ID }} oauth-secret: \${{ secrets.TS_API_CLIENT_SECRET }} tags: \${{ secrets.TS_TAG }} I'll let Tailscale SSH↗ facilitate the authentication for the rsync-ssh login, but I need to pre-stage ~/.ssh/known_hosts on the runner so that the client will know it can trust the server:
# torchlight! {&#34;lineNumbers&#34;:true} - name: Configure SSH known hosts # [tl! reindex(45)] run: | mkdir -p ~/.ssh echo &#34;\${{ secrets.SSH_KNOWN_HOSTS }}&#34; &gt; ~/.ssh/known_hosts chmod 644 ~/.ssh/known_hosts And finally, I'll use rsync to transfer the contents of the gempost-produced public/ folder to the Agate content/ folder on the remote server:
# torchlight! {&#34;lineNumbers&#34;:true} - name: Deploy GMI to Agate # [tl! reindex(50)] run: | rsync -avz --delete -e ssh public/ deploy@\${{ secrets.GMI_HOST }}:\${{ secrets.GMI_CONTENT_PATH }} You can view the workflow in its entirety on GitHub↗. But before I can actually use it, I'll need to configure Tailscale, set up the server, and safely stash those secrets.* values in the repo.
Tailscale configuration I want to use a pair of ACL tags to identify the GitHub runner and the server, and ensure that the runner can connect to the server over port 22. To create a new tag in a Tailscale ACL, I need to assign the permission, so I'll add this to my ACL file:
&#34;tagOwners&#34;: { &#34;tag:gh-bld&#34;: [&#34;group:admins&#34;], // github builder &#34;tag:gh-srv&#34;: [&#34;group:admins&#34;], // server it can deploy to }, Then I'll add this to the acls block:
&#34;acls&#34;: [ { // github runner can talk to the deployment target &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;tag:gh-bld&#34;], &#34;ports&#34;: [ &#34;tag:gh-srv:22&#34; ], } ], And I'll use this to configure Tailscale SSH↗ to define that the runner can log in to the web server as the deploy user using a keypair automagically managed by Tailscale:
&#34;ssh&#34;: [ { &#34;action&#34;: &#34;accept&#34;, &#34;src&#34;: [&#34;tag:gh-bld&#34;], &#34;dst&#34;: [&#34;tag:gh-srv&#34;], &#34;users&#34;: [&#34;deploy&#34;], } ], To generate the OAuth client, I'll head to Tailscale Admin Console &gt; Settings &gt; OAuth Clients↗ and click the Generate OAuth Client button. The new client only needs to have the ability to read and write new devices, and I set it to assign the tag:gh-bld tag:
I take the Client ID and Client Secret and store those in the GitHub Repo as Actions Repository Secrets named TS_API_CLIENT_ID and TS_API_CLIENT_SECRET. I also save the tag as TS_TAG.
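Secrets like these can also be stashed from the command line with the GitHub CLI instead of the web UI. Here's a sketch with placeholder values (only the tag value below actually comes from this post), not the exact commands I ran:
# hypothetical example using the GitHub CLI from within the repo
gh secret set TS_API_CLIENT_ID --body "<oauth-client-id>"
gh secret set TS_API_CLIENT_SECRET --body "<oauth-client-secret>"
gh secret set TS_TAG --body "tag:gh-bld"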
Server configuration From my previous manual experimentation, I have the Docker configuration for the Agate server hanging out at /opt/gemini/, and kineto is on the same server at /opt/kineto/ (though it doesn't really matter where kineto actually lives - it can proxy any gemini:// address from anywhere).
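As a quick sanity check on the server (assuming the docker-compose.yaml described below is what's actually running), I might confirm that the Agate container is up and listening on the standard Gemini port:
cd /opt/gemini
sudo docker compose up -d    # start (or confirm) the Agate container
sudo ss -tlnp | grep 1965    # 1965 is the default Gemini port; Agate should be bound here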
I'm actually using Agate's support for Virtual Hosts↗ to serve both the capsule I've been discussing in this post as well as a version of this very site available in Geminispace at gemini://gmi.runtimeterror.dev. By passing multiple hostnames to the agate command line, it looks for hostname-specific subdirs inside of the content and certs folders. So my production setup looks like this:
. ├── certs │ ├── capsule.jbowdre.lol │ └── gmi.runtimeterror.dev ├── content │ ├── capsule.jbowdre.lol │ └── gmi.runtimeterror.dev ├── docker-compose.yaml └── Dockerfile And the command: line of docker-compose.yaml looks like this:
command: &gt; --content content --certs certs --addr 0.0.0.0:1965 --hostname gmi.runtimeterror.dev --hostname capsule.jbowdre.lol --lang en-US I told the Tailscale ACL that the runner should be able to log in to the server as the deploy user, so I guess I'd better create that account on the server:
sudo useradd -U -m -s /usr/bin/bash deploy # [tl! .cmd] And I want to make that user the owner of the content directories so that it can use rsync to overwrite them when changes are made:
sudo chown -R deploy:deploy /opt/gemini/content/capsule.jbowdre.lol # [tl! .cmd:1] sudo chown -R deploy:deploy /opt/gemini/content/gmi.runtimeterror.dev I should also make sure that the server bears the appropriate Tailscale tag and that it has Tailscale SSH enabled:
sudo tailscale up --advertise-tags=&#34;tag:gh-srv&#34; --ssh I can use the ssh-keyscan utility from another system to retrieve the server's SSH public keys so they can be added to the runner's known_hosts file:
ssh-keyscan -H $servername # [tl! .cmd] I'll copy the entirety of that output and save it as a repository secret named SSH_KNOWN_HOSTS. I'll also save the Tailnet node name as GMI_HOST and the full path to the capsule's content directory (/opt/gemini/content/capsule.jbowdre.lol/) as GMI_CONTENT_PATH.
Deploy! All the pieces are in place, and all that's left is to git commit and git push the repo I've been working on. With a bit of luck (and a lot of failures-turned-lessons-learned), GitHub shows a functioning deployment!
And the capsule is live at both https://capsule.jbowdre.lol and gemini://capsule.jbowdre.lol:
Come check it out!
My Capsule on Gemini↗ My Capsule on the web↗

Dynamically Generating OpenGraph Images With Hugo

I've lately seen some folks on social.lol↗ posting about their various strategies for automatically generating Open Graph images↗ for their Eleventy↗ sites. So this weekend I started exploring how I could do that for my Hugo↗ site1.
During my search, I came across a few different approaches using external services or additional scripts to run at build time, but I was hoping for a way to do this with Hugo's built-in tooling. I eventually came across a tremendously helpful post from Aaro↗ titled Generating OpenGraph images with Hugo↗. This solution was exactly what I was after, as it uses Hugo's image functions↗ to dynamically create a share image for each page.
I ended up borrowing heavily from Aaro's approach while adding a few small variations for my OpenGraph images.
When sharing the home page, the image includes the site description. When sharing a post, the image includes the post title. ... and if the post has a thumbnail2 listed in the front matter, that gets overlaid in the corner. Here's how I did it.
New resources Based on Aaro's suggestions, I used GIMP↗ to create a 1200x600 image for the base. I'm not a graphic designer3 so I kept it simple while trying to match the site's theme.
I had to install the Fira Mono .ttf↗ font to my ~/.fonts/ folder so I could use it in GIMP, and I wound up with a decent recreation of the little &quot;logo&quot; at the top of the page.
That fits with the vibe of the site, and leaves plenty of room for text to be added to the image.
I also wanted to use that font later for the text overlay, so I stashed both of those resources in my assets/ folder:
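Roughly, that means just these two files (names as referenced by the template code later in this post):
assets/
├── FiraMono-Regular.ttf   # font used for the text overlay
└── og_base.png            # 1200x600 base image built in GIMP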
OpenGraph partial Hugo uses an internal template↗ for rendering OpenGraph properties by default. I needed to import that as a partial so that I could override its behavior. So I dropped the following in layouts/partials/opengraph.html as a starting point:
// torchlight! {&#34;lineNumbers&#34;: true} &lt;meta property=&#34;og:title&#34; content=&#34;{{ .Title }}&#34; /&gt; &lt;meta property=&#34;og:description&#34; content=&#34;{{ with .Description }}{{ . }}{{ else }}{{if .IsPage}}{{ .Summary }}{{ else }}{{ with .Site.Params.description }}{{ . }}{{ end }}{{ end }}{{ end }}&#34; /&gt; &lt;meta property=&#34;og:type&#34; content=&#34;{{ if .IsPage }}article{{ else }}website{{ end }}&#34; /&gt; &lt;meta property=&#34;og:url&#34; content=&#34;{{ .Permalink }}&#34; /&gt; &lt;meta property=&#34;og:locale&#34; content=&#34;{{ .Lang }}&#34; /&gt; {{- if .IsPage }} {{- $iso8601 := &#34;2006-01-02T15:04:05-07:00&#34; -}} &lt;meta property=&#34;article:section&#34; content=&#34;{{ .Section }}&#34; /&gt; {{ with .PublishDate }}&lt;meta property=&#34;article:published_time&#34; {{ .Format $iso8601 | printf &#34;content=%q&#34; | safeHTMLAttr }} /&gt;{{ end }} {{ with .Lastmod }}&lt;meta property=&#34;article:modified_time&#34; {{ .Format $iso8601 | printf &#34;content=%q&#34; | safeHTMLAttr }} /&gt;{{ end }} {{- end -}} {{- with .Params.audio }}&lt;meta property=&#34;og:audio&#34; content=&#34;{{ . }}&#34; /&gt;{{ end }} {{- with .Params.locale }}&lt;meta property=&#34;og:locale&#34; content=&#34;{{ . }}&#34; /&gt;{{ end }} {{- with .Site.Params.title }}&lt;meta property=&#34;og:site_name&#34; content=&#34;{{ . }}&#34; /&gt;{{ end }} {{- with .Params.videos }}{{- range . }} &lt;meta property=&#34;og:video&#34; content=&#34;{{ . | absURL }}&#34; /&gt; {{ end }}{{ end }} To use this new partial, I added it to my layouts/partials/head.html:
{{ partial &#34;opengraph&#34; . }} which is in turn loaded by layouts/_default/baseof.html:
&lt;head&gt; {{- partial &#34;head.html&#34; . -}} &lt;/head&gt; So now the customized OpenGraph content will be loaded for each page.
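A quick local spot check (not part of the setup itself, just one way I might verify the partial is wired up) is to build the site and grep a rendered page for the OpenGraph tags:
hugo                                                                # build the site into public/
grep -o '<meta property="og:[a-z:]*"' public/index.html | sort -u   # should list og:title, og:description, og:type, ...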
Aaro's OG image generation Aaro's code↗ provided the base functionality for what I needed:
{{/* Generate opengraph image */}} {{- if .IsPage -}} {{ $base := resources.Get &#34;og_base.png&#34; }} {{ $boldFont := resources.Get &#34;/Inter-SemiBold.ttf&#34;}} {{ $mediumFont := resources.Get &#34;/Inter-Medium.ttf&#34;}} {{ $img := $base.Filter (images.Text .Site.Title (dict &#34;color&#34; &#34;#ffffff&#34; &#34;size&#34; 52 &#34;linespacing&#34; 2 &#34;x&#34; 141 &#34;y&#34; 117 &#34;font&#34; $boldFont ))}} {{ $img = $img.Filter (images.Text .Page.Title (dict &#34;color&#34; &#34;#ffffff&#34; &#34;size&#34; 64 &#34;linespacing&#34; 2 &#34;x&#34; 141 &#34;y&#34; 291 &#34;font&#34; $mediumFont ))}} {{ $img = resources.Copy (path.Join .Page.RelPermalink &#34;og.png&#34;) $img }} &lt;meta property=&#34;og:image&#34; content=&#34;{{$img.Permalink}}&#34;&gt; &lt;meta property=&#34;og:image:width&#34; content=&#34;{{$img.Width}}&#34; /&gt; &lt;meta property=&#34;og:image:height&#34; content=&#34;{{$img.Height}}&#34; /&gt; &lt;!-- Twitter metadata (used by other websites as well) --&gt; &lt;meta name=&#34;twitter:card&#34; content=&#34;summary_large_image&#34; /&gt; &lt;meta name=&#34;twitter:title&#34; content=&#34;{{ .Title }}&#34; /&gt; &lt;meta name=&#34;twitter:description&#34; content=&#34;{{ with .Description }}{{ . }}{{ else }}{{if .IsPage}}{{ .Summary }}{{ else }}{{ with .Site.Params.description }}{{ . }}{{ end }}{{ end }}{{ end -}}&#34;/&gt; &lt;meta name=&#34;twitter:image&#34; content=&#34;{{$img.Permalink}}&#34; /&gt; {{ end }} The resources.Get↗ bits import the image and font resources to make them available to the images.Text↗ functions, which add the site and page title texts to the image using the designated color, size, placement, and font.
The resources.Copy line moves the generated OG image alongside the post itself and gives it a clean og.png name rather than the very-long randomly-generated name it would have by default.
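After that same sort of local build, the effect of the resources.Copy call is easy to spot: each rendered post's folder gets its own cleanly-named og.png. A quick check along these lines:
find public -type f -name og.png | head -n 5   # one og.png alongside each post that generated an image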
And then the &lt;meta /&gt; lines insert the generated image into the page's &lt;head&gt; block so it can be rendered when the link is shared on sites which support OpenGraph.
This is a great starting point for what I wanted to accomplish, but I made some changes to my opengraph.html partial to tailor it to my needs.
My tweaks As I mentioned earlier, I wanted to have three slightly-different recipes for baking my OG images: one for the homepage, one for standard posts, and one for posts with an associated thumbnail. They all use the same basic code, though, so I wanted to be sure that my setup didn't repeat itself too much.
My code starts with fetching my resources up front, and initializing an empty $text variable to hold either the site description or post title:
{{ $img := resources.Get &#34;og_base.png&#34; }} {{ $font := resources.Get &#34;/FiraMono-Regular.ttf&#34; }} {{ $text := &#34;&#34; }} For the site homepage, I set $text to hold the site description:
{{- if .IsHome }} {{ $text = .Site.Params.Description }} {{- end }} On standard post pages, I used the page title instead:
{{- if .IsPage }} {{ $text = .Page.Title }} {{ end }} If the page has a thumbnail parameter defined in the front matter, Hugo will use .Resources.Get to grab the image.
{{- with .Params.thumbnail }} {{ $thumbnail := $.Resources.Get . }} Resources vs resources
The resources.Get function↗ (little r) I used earlier works on global resources, like the image and font stored in the site's assets/ directory. On the other hand, the Resources.Get method↗ (big R) is used for loading page resources, like the file indicated by the page's thumbnail parameter.
Since I'm calling this method from inside a with block I use a $ in front of the method name to get the parent context↗. Otherwise, the leading . would refer directly to the thumbnail parameter (which isn't a page and so doesn't have the method available4).
Anyhoo, after the thumbnail is loaded, I use the Fit image processing method↗ to scale down the thumbnail. It is then passed to the images.Overlay function↗ to overlay it near the top right corner of the og_base.png image5.
{{ with $thumbnail }} {{ $img = $img.Filter (images.Overlay (.Process &#34;fit 300x250&#34;) 875 38 )}} {{ end }} {{ end }} Then I insert the desired text:
{{ $img = $img.Filter (images.Text $text (dict &#34;color&#34; &#34;#d8d8d8&#34; &#34;size&#34; 64 &#34;linespacing&#34; 2 &#34;x&#34; 40 &#34;y&#34; 300 &#34;font&#34; $font ))}} {{ $img = resources.Copy (path.Join $.Page.RelPermalink &#34;og.png&#34;) $img }} All together now After merging my code in with the existing layouts/partials/opengraph.html, here's what the whole file looks like:
// torchlight! {&#34;lineNumbers&#34;: true} {{ $img := resources.Get &#34;og_base.png&#34; }} &lt;!-- [tl! **:2] --&gt; {{ $font := resources.Get &#34;/FiraMono-Regular.ttf&#34; }} {{ $text := &#34;&#34; }} &lt;meta property=&#34;og:title&#34; content=&#34;{{ .Title }}&#34; /&gt; &lt;meta property=&#34;og:description&#34; content=&#34;{{ with .Description }}{{ . }}{{ else }}{{if .IsPage}}{{ .Summary }}{{ else }}{{ with .Site.Params.description }}{{ . }}{{ end }}{{ end }}{{ end }}&#34; /&gt; &lt;meta property=&#34;og:type&#34; content=&#34;{{ if .IsPage }}article{{ else }}website{{ end }}&#34; /&gt; &lt;meta property=&#34;og:url&#34; content=&#34;{{ .Permalink }}&#34; /&gt; &lt;meta property=&#34;og:locale&#34; content=&#34;{{ .Lang }}&#34; /&gt; {{- if .IsHome }} &lt;!-- [tl! **:2] --&gt; {{ $text = .Site.Params.Description }} {{- end }} {{- if .IsPage }} {{- $iso8601 := &#34;2006-01-02T15:04:05-07:00&#34; -}} &lt;meta property=&#34;article:section&#34; content=&#34;{{ .Section }}&#34; /&gt; {{ with .PublishDate }}&lt;meta property=&#34;article:published_time&#34; {{ .Format $iso8601 | printf &#34;content=%q&#34; | safeHTMLAttr }} /&gt;{{ end }} {{ with .Lastmod }}&lt;meta property=&#34;article:modified_time&#34; {{ .Format $iso8601 | printf &#34;content=%q&#34; | safeHTMLAttr }} /&gt;{{ end }} {{ $text = .Page.Title }} &lt;!-- [tl! ** ] --&gt; {{ end }} {{- with .Params.thumbnail }} &lt;!-- [tl! **:start] --&gt; {{ $thumbnail := $.Resources.Get . }} {{ with $thumbnail }} {{ $img = $img.Filter (images.Overlay (.Process &#34;fit 300x250&#34;) 875 38 )}} {{ end }} {{ end }} {{ $img = $img.Filter (images.Text $text (dict &#34;color&#34; &#34;#d8d8d8&#34; &#34;size&#34; 64 &#34;linespacing&#34; 2 &#34;x&#34; 40 &#34;y&#34; 300 &#34;font&#34; $font ))}} {{ $img = resources.Copy (path.Join $.Page.RelPermalink &#34;og.png&#34;) $img }} &lt;!-- [tl! **:end] --&gt; &lt;meta property=&#34;og:image&#34; content=&#34;{{$img.Permalink}}&#34;&gt; &lt;meta property=&#34;og:image:width&#34; content=&#34;{{$img.Width}}&#34; /&gt; &lt;meta property=&#34;og:image:height&#34; content=&#34;{{$img.Height}}&#34; /&gt; &lt;meta name=&#34;twitter:card&#34; content=&#34;summary_large_image&#34; /&gt; &lt;meta name=&#34;twitter:title&#34; content=&#34;{{ .Title }}&#34; /&gt; &lt;meta name=&#34;twitter:description&#34; content=&#34;{{ with .Description }}{{ . }}{{ else }}{{if .IsPage}}{{ .Summary }}{{ else }}{{ with .Site.Params.description }}{{ . }}{{ end }}{{ end }}{{ end -}}&#34;/&gt; &lt;meta name=&#34;twitter:image&#34; content=&#34;{{$img.Permalink}}&#34; /&gt; {{- with .Params.audio }}&lt;meta property=&#34;og:audio&#34; content=&#34;{{ . }}&#34; /&gt;{{ end }} {{- with .Site.Params.title }}&lt;meta property=&#34;og:site_name&#34; content=&#34;{{ . }}&#34; /&gt;{{ end }} {{- with .Params.videos }}{{- range . }} &lt;meta property=&#34;og:video&#34; content=&#34;{{ . | absURL }}&#34; /&gt; {{ end }}{{ end }} And it works! I'm sure this could be further optimized by someone who knows what they're doing6. I'd really like to find a better way of positioning the thumbnail overlay to better account for different heights and widths. But for now, I'm pretty happy with how it works, and I enjoyed learning more about Hugo along the way.
You're looking at it.&#160;&#x21a9;&#xfe0e;
My current theme doesn't make use of the thumbnails, but a previous theme did so I've got a bunch of posts with thumbnails still assigned. And now I've got a use for them again!&#160;&#x21a9;&#xfe0e;
Or a web designer, if I'm being honest.&#160;&#x21a9;&#xfe0e;
Hugo scoping is kind of wild.&#160;&#x21a9;&#xfe0e;
The overlay is placed using absolute X and Y coordinates. There's probably a way to tell it &quot;offset the top-right corner of the overlay 20x20 from the top right of the base image&quot; but I ran out of caffeine to figure that out at this time. Let me know if you know a trick!&#160;&#x21a9;&#xfe0e;
Like Future John, perhaps? Past John loves leaving stuff for that guy to figure out.&#160;&#x21a9;&#xfe0e;
Displaying Data from a Tempest Weather Station on a Static Site

As I covered briefly in a recent Scribble↗, I was inspired by the way Kris's omg.lol page↗ displays realtime data from his Weatherflow Tempest weather station↗. I thought that was really neat and wanted to do the same on my omg.lol page↗ with data from my own Tempest, but I wanted to find a way to do it without needing to include an authenticated API call in the client-side JavaScript.
I realized I could use a GitHub Actions workflow to retrieve the data from the authenticated Tempest API, post it somewhere publicly accessible, and then have the client-side code fetch the data from there without needing any authentication. After a few days of tinkering, I came up with a presentation I'm happy with.
This post will cover how I did it.
Retrieve Weather Data from Tempest API To start, I want to play with the API a bit to see what the responses look like. Before I can talk to the API, though, I need to generate a new token for my account at https://tempestwx.com/settings/tokens. I also make a note of my station ID, and store both of those values in my shell for easier use.
read wx_token # [tl! .cmd:1] read wx_station After browsing the Tempest API Explorer a little, it seems to me like the /better_forecast endpoint↗ will probably be the easiest to work with, particularly since it lets the user choose which units will be used in the response. That will keep me from having to do metric-to-imperial conversions for many of the data.
So I start out calling the API like this:
curl -sL &#34;https://swd.weatherflow.com/swd/rest/better_forecast?station_id=$wx_station&amp;token=$wx_token&amp;units_temp=f&amp;units_wind=mph&amp;units_pressure=inhg&amp;units_precip=in&amp;units_distance=mi&#34; \\ | jq # [tl! .cmd:-1] { # [tl! .nocopy:start] &#34;current_conditions&#34;: { &#34;air_density&#34;: 1.2, &#34;air_temperature&#34;: 59.0, &#34;brightness&#34;: 1, &#34;conditions&#34;: &#34;Rain Possible&#34;, &#34;delta_t&#34;: 2.0, &#34;dew_point&#34;: 55.0, &#34;feels_like&#34;: 59.0, &#34;icon&#34;: &#34;possibly-rainy-night&#34;, &#34;is_precip_local_day_rain_check&#34;: true, &#34;is_precip_local_yesterday_rain_check&#34;: true, &#34;lightning_strike_count_last_1hr&#34;: 0, &#34;lightning_strike_count_last_3hr&#34;: 0, &#34;lightning_strike_last_distance&#34;: 12, &#34;lightning_strike_last_distance_msg&#34;: &#34;11 - 13 mi&#34;, &#34;lightning_strike_last_epoch&#34;: 1706394852, &#34;precip_accum_local_day&#34;: 0, &#34;precip_accum_local_yesterday&#34;: 0.05, &#34;precip_minutes_local_day&#34;: 0, &#34;precip_minutes_local_yesterday&#34;: 28, &#34;pressure_trend&#34;: &#34;falling&#34;, &#34;relative_humidity&#34;: 88, &#34;sea_level_pressure&#34;: 29.89, &#34;solar_radiation&#34;: 0, &#34;station_pressure&#34;: 29.25, &#34;time&#34;: 1707618643, &#34;uv&#34;: 0, &#34;wet_bulb_globe_temperature&#34;: 57.0, &#34;wet_bulb_temperature&#34;: 56.0, &#34;wind_avg&#34;: 2.0, &#34;wind_direction&#34;: 244, &#34;wind_direction_cardinal&#34;: &#34;WSW&#34;, &#34;wind_gust&#34;: 2.0 }, &#34;forecast&#34;: { # [tl!
collapse:start] &#34;daily&#34;: [ { [...], &#34;day_num&#34;: 10, [...], }, { [...], &#34;day_num&#34;: 11, [...], }, { [...], &#34;day_num&#34;: 12, [...], } ] } } # [tl! collapse:end .nocopy:end] So that validates that the endpoint will give me what I want, but I don't really need the extra 10-day forecast since I'm only interested in showing the current conditions. I can start working some jq magic to filter down to just what I'm interested in. And, while I'm at it, I'll stick the API URL in a variable to make that easier to work with.\nendpoint=&#34;https://swd.weatherflow.com/swd/rest/better_forecast?station_id=$wx_station&amp;token=$wx_token&amp;units_temp=f&amp;units_wind=mph&amp;units_pressure=inhg&amp;units_precip=in&amp;units_distance=mi&#34; # [tl! .cmd:1] curl -sL &#34;$endpoint&#34; | jq &#39;.current_conditions&#39; { # [tl! .nocopy:start] &#34;air_density&#34;: 1.2, &#34;air_temperature&#34;: 59.0, &#34;brightness&#34;: 1, &#34;conditions&#34;: &#34;Light Rain&#34;, &#34;delta_t&#34;: 2.0, &#34;dew_point&#34;: 55.0, &#34;feels_like&#34;: 59.0, &#34;icon&#34;: &#34;rainy&#34;, &#34;is_precip_local_day_rain_check&#34;: true, &#34;is_precip_local_yesterday_rain_check&#34;: true, &#34;lightning_strike_count_last_1hr&#34;: 0, &#34;lightning_strike_count_last_3hr&#34;: 0, &#34;lightning_strike_last_distance&#34;: 12, &#34;lightning_strike_last_distance_msg&#34;: &#34;11 - 13 mi&#34;, &#34;lightning_strike_last_epoch&#34;: 1706394852, &#34;precip_accum_local_day&#34;: 0, &#34;precip_accum_local_yesterday&#34;: 0.05, &#34;precip_description&#34;: &#34;Light Rain&#34;, &#34;precip_minutes_local_day&#34;: 0, &#34;precip_minutes_local_yesterday&#34;: 28, &#34;pressure_trend&#34;: &#34;falling&#34;, &#34;relative_humidity&#34;: 88, &#34;sea_level_pressure&#34;: 29.899, &#34;solar_radiation&#34;: 0, &#34;station_pressure&#34;: 29.258, &#34;time&#34;: 1707618703, &#34;uv&#34;: 0, &#34;wet_bulb_globe_temperature&#34;: 57.0, &#34;wet_bulb_temperature&#34;: 56.0, &#34;wind_avg&#34;: 1.0, &#34;wind_direction&#34;: 230, &#34;wind_direction_cardinal&#34;: &#34;SW&#34;, &#34;wind_gust&#34;: 2.0 } # [tl! .nocopy:end] Piping the response through jq '.current_conditions' works well to select that objects, but I'm still not going to want to display all of that information. After some thought, these are the fields I want to hold on to:\nair_temperature conditions feels_like (apparent air temperature) icon precip_accum_local_day (rainfall total for the day) pressure_trend (rising, falling, or steady) relative_humidity sea_level_pressure (the pressure recorded by the station, adjusted for altitude) time (epoch↗ timestamp of the report) wind_direction_cardinal (which way the wind is blowing from) wind_gust I can use more jq wizardry to grab only those fields, and I'll also rename a few of the more cumbersome ones and round some of the values where I don't need full decimal precision:\ncurl -sL &#34;$endpoint&#34; | jq &#39;.current_conditions | {temperature: (.air_temperature | round), conditions, feels_like: (.feels_like | round), icon, rain_today: .precip_accum_local_day, pressure_trend, humidity: .relative_humidity, pressure: ((.sea_level_pressure * 100) | round | . / 100), time, wind_direction: .wind_direction_cardinal, wind_gust}&#39; { # [tl! 
.cmd:-4,1 .nocopy:start] &#34;temperature&#34;: 58, &#34;conditions&#34;: &#34;Very Light Rain&#34;, &#34;feels_like&#34;: 58, &#34;icon&#34;: &#34;rainy&#34;, &#34;rain_today&#34;: 0.01, &#34;pressure_trend&#34;: &#34;steady&#34;, &#34;humidity&#34;: 91, &#34;pressure&#34;: 29.9, &#34;time&#34;: 1707620142, &#34;wind_direction&#34;: &#34;W&#34;, &#34;wind_gust&#34;: 0.0 } # [tl! .nocopy:end] Now I'm just grabbing the specific data points that I plan to use, and I'm renaming messy names like precip_accum_local_day to things like rain_today to make them a bit less unwieldy. I'm also rounding the temperatures to whole numbers1, and reducing the pressure from three decimal points to just two.\nNow that I've got the data I want, I'll just stash it in a local file for safe keeping:\ncurl -sL &#34;$endpoint&#34; | jq &#39;.current_conditions | {temperature: (.air_temperature | round), conditions, feels_like: (.feels_like | round), icon, rain_today: .precip_accum_local_day, pressure_trend, humidity: .relative_humidity, pressure: ((.sea_level_pressure * 100) | round | . / 100), time, wind_direction: .wind_direction_cardinal, wind_gust}&#39; \\ &gt; tempest.json # [tl! .cmd:-4,1 **] Post to paste.lol I've been using omg.lol↗ for a couple of months now, and I'm constantly discovering new uses for the bundled services. I thought that the paste.lol↗ service would be a great fit for this project. For one it was easy to tie it to a custom domain2, and it's got an easy API↗ that I can use for automating this.\nTo use the API, I'll of course need a token. I can find that at the bottom of my omg.lol Account↗ page, and I'll once again store that as an environment variable. I can then test out the API by creating a new paste:\ncurl -L --request POST --header &#34;Authorization: Bearer $omg_token&#34; \\ # [tl! .cmd] &#34;https://api.omg.lol/address/jbowdre/pastebin/&#34; \\ --data &#39;{&#34;title&#34;: &#34;paste-test&#34;, &#34;content&#34;: &#34;Tastes like paste.&#34;}&#39; { # [tl! .nocopy:9] &#34;request&#34;: { &#34;status_code&#34;: 200, &#34;success&#34;: true }, &#34;response&#34;: { &#34;message&#34;: &#34;OK, your paste has been saved. &lt;a href=\\&#34;https:\\/\\/paste.lol\\/jbowdre\\/paste-test\\&#34; target=\\&#34;_blank\\&#34;&gt;View it live&lt;\\/a&gt;.&#34;, &#34;title&#34;: &#34;paste-test&#34; } } And, sure enough, I can view it at my slick custom domain for my pastes, https://paste.jbowdre.lol/paste-test\nThat page is simple enough, but I'll really want to be sure I can store and retrieve the raw JSON that I captured from the Tempest API. There's a handy button the webpage for that, or I can just append /raw to the URL:\nYep, looks like that will do the trick. One small hurdle, though: I have to send the --data as a JSON object. I already have the JSON file that I pulled from the Tempest API, but I'll need to wrap that inside another layer of JSON. Fortunately, jq can come to the rescue once more.\nrequest_body=&#39;{&#34;title&#34;: &#34;tempest.json&#34;, &#34;content&#34;: &#39;&#34;$(jq -Rsa . tempest.json)&#34;&#39;}&#39; # [tl! .cmd] The jq command here reads the tempest.json file as plaintext (not as a JSON object), and then formats it as a JSON string so that it can be wrapped in the request body JSON:\njq -Rsa &#39;.&#39; tempest.json # [tl! 
.cmd .nocopy:1] &#34;{\\n \\&#34;temperature\\&#34;: 58,\\n \\&#34;conditions\\&#34;: \\&#34;Heavy Rain\\&#34;,\\n \\&#34;feels_like\\&#34;: 58,\\n \\&#34;icon\\&#34;: \\&#34;rainy\\&#34;,\\n \\&#34;rain_today\\&#34;: 0.05,\\n \\&#34;pressure_trend\\&#34;: \\&#34;steady\\&#34;,\\n \\&#34;humidity\\&#34;: 93,\\n \\&#34;pressure\\&#34;: 29.89,\\n \\&#34;time\\&#34;: 1707620863,\\n \\&#34;wind_direction\\&#34;: \\&#34;S\\&#34;,\\n \\&#34;wind_gust\\&#34;: 1.0\\n}\\n&#34; So then I can repeat the earlier curl but this time pass in $request_body to include the file contents:\ncurl -L --request POST --header &#34;Authorization: Bearer $omg_token&#34; \\ # [tl! .cmd] &#34;https://api.omg.lol/address/jbowdre/pastebin/&#34; \\ --data &#34;$request_body&#34; { # [tl! .nocopy:9] &#34;request&#34;: { &#34;status_code&#34;: 200, &#34;success&#34;: true }, &#34;response&#34;: { &#34;message&#34;: &#34;OK, your paste has been saved. &lt;a href=\\&#34;https:\\/\\/paste.lol\\/jbowdre\\/tempest.json\\&#34; target=\\&#34;_blank\\&#34;&gt;View it live&lt;\\/a&gt;.&#34;, &#34;title&#34;: &#34;tempest.json&#34; } } And there it is, at https://paste.jbowdre.lol/tempest.json/raw:\nAutomate with GitHub Actions At this point, I know the commands needed to retrieve weather data from the Tempest API, and I know what will be needed to post it to the omg.lol pastebin. The process works, but now it's time to automate it. And I'll do that with a simple GitHub Actions workflow↗.\nI create a new GitHub repo↗ to store this (and future?) omg.lol sorcery, and navigate to Settings &gt; Secrets and variables &gt; Actions. I'll create four new repository secrets to hold my variables:\nOMG_ADDR OMG_TOKEN TEMPEST_STATION TEMPEST_TOKEN And I'll create a new file at .github/workflows/tempest.yml to define my new workflow. Here's the start:\n# torchlight! {&#34;lineNumbers&#34;: true} name: Tempest Update on: schedule: - cron: &#34;*/5 * * * *&#34; workflow_dispatch: push: branches: - main defaults: run: shell: bash The on block defines when the workflow will run:\nOn a schedule which fires (roughly3) every five minutes On a workflow_dispatch event (so I can start it manually if I want) When changes are pushed to the main branch And the defaults block makes sure that the following scripts will be run in bash:\n# torchlight! {&#34;lineNumbers&#34;: true} name: Tempest Update # [tl! collapse:start] on: schedule: - cron: &#34;*/5 * * * *&#34; workflow_dispatch: push: branches: - main defaults: run: shell: bash # [tl! collapse:end] jobs: fetch-and-post-tempest: runs-on: ubuntu-latest steps: - name: Fetch Tempest API data # [tl! **:2,3] run: | curl -sL &#34;https://swd.weatherflow.com/swd/rest/better_forecast?station_id=${{ secrets.TEMPEST_STATION }}&amp;token=${{ secrets.TEMPEST_TOKEN }}&amp;units_temp=f&amp;units_wind=mph&amp;units_pressure=inhg&amp;units_precip=in&amp;units_distance=mi&#34; \\ | jq &#39;.current_conditions | {temperature: (.air_temperature | round), conditions, feels_like: (.feels_like | round), icon, rain_today: .precip_accum_local_day, pressure_trend, humidity: .relative_humidity, pressure: ((.sea_level_pressure * 100) | round | . / 100), time, wind_direction: .wind_direction_cardinal, wind_gust}&#39; \\ &gt; tempest.json - name: POST to paste.lol # [tl! **:2,4] run: | request_body=&#39;{&#34;title&#34;: &#34;tempest.json&#34;, &#34;content&#34;: &#39;&#34;$(jq -Rsa . 
tempest.json)&#34;&#39;}&#39; curl --location --request POST --header &#34;Authorization: Bearer ${{ secrets.OMG_TOKEN }}&#34; \\ &#34;https://api.omg.lol/address/${{ secrets.OMG_ADDR }}/pastebin/&#34; \\ --data &#34;$request_body&#34; Each step in the jobs section should look pretty familiar since those are almost exactly the commands that I used earlier. The only real difference is that they now use the format ${{ secrets.SECRET_NAME }} to pull in the repository secrets I just created.\nOnce I save, commit, and push this new file to the repo, it will automatically execute, and I can go to the Actions↗ tab to confirm that the run was succesful. I can also check https://paste.jbowdre.lol/tempest.json to confirm that the contents have updated.\nShow Weather on Homepage By this point, I've fetched the weather data from Tempest, filtered it for just the fields I want to see, and posted the condensed JSON to a publicly-accessible pastebin. Now I just need to figure out how to make it show up on my omg.lol homepage. Rather than implementing these changes on the public site right away, I'll start with a barebones tempest.html that I can test with locally. Here's the initial structure that I'll be building from:\n&lt;!-- torchlight! {&#34;lineNumbers&#34;: true} --&gt; &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;meta charset=&#34;utf-8&#34;&gt; &lt;link rel=&#34;stylesheet&#34; href=&#34;https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.1.2/css/all.min.css&#34; integrity=&#34;sha512-1sCRPdkRXhBV2PBLUdRb4tMg1w2YPf37qatUFeS7zlBy7jJI8Lf4VHwWfZZfpXtYSLy85pkm9GaYVYMfw5BC1A==&#34; crossorigin=&#34;anonymous&#34;&gt; &lt;title&gt;Weather Test&lt;/title&gt; &lt;script&gt; // TODO &lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Local Weather&lt;/h1&gt; &lt;ul&gt; &lt;li&gt;Conditions: &lt;i class=&#39;fa-solid fa-cloud-sun-rain&#39;&gt;&lt;/i&gt;&lt;span id=&#34;conditions&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;Temperature: &lt;i class=&#39;fa-solid fa-temperature-half&#39;&gt;&lt;/i&gt; &lt;span id=&#34;temp&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;Humidity: &lt;span id=&#34;humidity&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;Wind: &lt;span id=&#34;wind&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;Precipitation: &lt;i class=&#39;fa-solid fa-droplet-slash&#39;&gt;&lt;/i&gt;&lt;span id=&#34;rainToday&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;Pressure: &lt;i class=&#39;fa-solid fa-arrow-right-long&#39;&gt;&lt;/i&gt;&lt;span id=&#34;pressure&#34;&gt;&lt;/span&gt;&lt;/li&gt; &lt;li&gt;&lt;i&gt;Last Update: &lt;span id=&#34;time&#34;&gt;&lt;/span&gt;&lt;/i&gt;&lt;/li&gt; &lt;/body&gt; &lt;/html&gt; In the &lt;head&gt;, I'm specifying the character set so that I won't get any weird reactions when using exotic symbols like °, and I'm also pulling in the Font Awesome↗ stylesheet so I can easily use those icons in the weather display. The &lt;body&gt; just holds empty &lt;span&gt;s that will later be populated with data via JavaScript.\nI only included a few icons here. I'm going to try to make those icons dynamic so that they'll change depending on how hot/cold it is or what the pressure trend is doing. I don't plan to toy with the humidity, wind, or last update icons (yet) so I'm not going to bother with testing those locally.\nWith that framework in place, I can use python3 -m http.server 8080 to serve the page at http://localhost:8080/tempest.html.\nIt's rather sparse, but it works. 
I won't be tinkering with any of that HTML from this point on, I'll just focus on the (currently-empty) &lt;script&gt; block. I don't think I'm super great with JavaScript, so I leaned heavily upon the code on Kris's page↗ to get started and then put my own spin on it as I went along.\nBefore I can do much of anything, though, I'll need to start by retrieving the JSON data from my paste.lol endpoint and parsing the data:\n// torchlight! {&#34;lineNumbers&#34;: true} const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // [tl! reindex(8)] // fetch data from pastebin fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // parse data let updateTime = res.time; let conditions = res.conditions; let temp = res.temperature; let humidity = res.humidity; let wind = res.wind_gust; let rainToday = res.rain_today; let pressure = res.pressure; // display data document.getElementById(&#39;time&#39;).innerHTML = updateTime; document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; }); So this is fetching the data from the pastebin, mapping variables to the JSON fields, and using document.getElementById() to select the placeholder &lt;span&gt;s by their id and replace the corresponding innerHTML with the appropriate values.\nThe result isn't very pretty, but it's a start:\nThat Unix epoch timestamp↗ stands out as being particularly unwieldy. Instead of the exact timestamp, I'd prefer to show that as the relative time since the last update. I'll do that with the Intl.RelativeTimeFormat object↗.\n// torchlight! {&#34;lineNumbers&#34;: true} const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // [tl! reindex(8)] // get ready to calculate relative time [tl! ++:start **:start] const units = { year : 24 * 60 * 60 * 1000 * 365, month : 24 * 60 * 60 * 1000 * 365/12, day : 24 * 60 * 60 * 1000, hour : 60 * 60 * 1000, minute: 60 * 1000, second: 1000 } let rtf = new Intl.RelativeTimeFormat(&#39;en&#39;, { numeric: &#39;auto&#39; }) let getRelativeTime = (d1, d2 = new Date()) =&gt; { let elapsed = d1 - d2 for (var u in units) if (Math.abs(elapsed) &gt; units[u] || u === &#39;second&#39;) return rtf.format(Math.round(elapsed/units[u]), u) } // [tl! ++:end **:end] // fetch data from pastebin fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // calculate age of last update [tl! ++:start **:start] let updateTime = res.time; updateTime = parseInt(updateTime); updateTime = updateTime*1000; updateTime = new Date(updateTime); let updateAge = getRelativeTime(updateTime); // parse data let updateTime = res.time; // [tl! -- reindex(-1)] let conditions = res.conditions; let temp = res.temperature; let humidity = res.humidity; let wind = res.wind_gust; let rainToday = res.rain_today; let pressure = res.pressure; // display data document.getElementById(&#39;time&#39;).innerHTML = updateAge; // [tl! ++ **] document.getElementById(&#39;time&#39;).innerHTML = updateTime; // [tl! 
-- reindex(-1)] document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; }); That's a bit more friendly:\nAlright, let's massage some of the other fields a little. I'll add units and labels, translate temperature and speed values to metric for my international friends, and combine a few additional fields for more context. I'm also going to make sure everything is lowercase to match the, uh, aesthetic of my omg page.\n// torchlight! {&#34;lineNumbers&#34;: true} const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // [tl! reindex(8) collapse:start] // get ready to calculate relative time const units = { year : 24 * 60 * 60 * 1000 * 365, month : 24 * 60 * 60 * 1000 * 365/12, day : 24 * 60 * 60 * 1000, hour : 60 * 60 * 1000, minute: 60 * 1000, second: 1000 } let rtf = new Intl.RelativeTimeFormat(&#39;en&#39;, { numeric: &#39;auto&#39; }) let getRelativeTime = (d1, d2 = new Date()) =&gt; { let elapsed = d1 - d2 for (var u in units) if (Math.abs(elapsed) &gt; units[u] || u === &#39;second&#39;) return rtf.format(Math.round(elapsed/units[u]), u) } // fetch data from pastebin fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // calculate age of last update let updateTime = res.time; updateTime = parseInt(updateTime); updateTime = updateTime*1000; updateTime = new Date(updateTime); let updateAge = getRelativeTime(updateTime); // [tl! collapse:end] // parse data let updateTime = res.time; let conditions = (res.conditions).toLowerCase(); // [tl! ++ **] let conditions = res.conditions; // [tl! -- reindex(-1)] let temp = `${res.temperature}°f (${(((res.temperature - 32) * 5) / 9).toFixed(1)}°c)`; // [tl! ++ **] let temp = res.temperature; // [tl! -- reindex(-1)] let humidity = `${res.humidity}% humidity`; // [tl! ++ **] let humidity = res.humidity; // [tl! -- reindex(-1)] let wind = `${res.wind_gust}mph (${(res.wind_gust*1.609344).toFixed(1)}kph) from ${(res.wind_direction).toLowerCase()}`; // [tl! ++ **] let wind = res.wind_gust; // [tl! -- reindex(-1)] let rainToday = `${res.rain_today}&#34; rain today`; // [tl! ++ **] let rainToday = res.rain_today; // [tl! -- reindex(-1)] let pressureTrend = res.pressure_trend; // [tl! ++:1 **:1] let pressure = `${res.pressure}inhg and ${pressureTrend}`; let pressure = res.pressure; // [tl! -- reindex(-1)] // display data [tl! collapse:start] document.getElementById(&#39;time&#39;).innerHTML = updateAge; document.getElementById(&#39;time&#39;).innerHTML = updateTime; document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; }); // [tl! collapse:end] That's a bit better:\nNow let's add some conditionals into the mix. I'd like to display the feels_like temperature but only if it's five or more degrees away from the actual air temperature. And if there hasn't been any rain, the page should show no rain today instead of 0.00&quot; rain today.\n// torchlight! 
{&#34;lineNumbers&#34;: true} const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // [tl! reindex(8) collapse:start] // get ready to calculate relative time const units = { year : 24 * 60 * 60 * 1000 * 365, month : 24 * 60 * 60 * 1000 * 365/12, day : 24 * 60 * 60 * 1000, hour : 60 * 60 * 1000, minute: 60 * 1000, second: 1000 } let rtf = new Intl.RelativeTimeFormat(&#39;en&#39;, { numeric: &#39;auto&#39; }) let getRelativeTime = (d1, d2 = new Date()) =&gt; { let elapsed = d1 - d2 for (var u in units) if (Math.abs(elapsed) &gt; units[u] || u === &#39;second&#39;) return rtf.format(Math.round(elapsed/units[u]), u) } // // fetch data from pastebin fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // calculate age of last update let updateTime = res.time; updateTime = parseInt(updateTime); updateTime = updateTime*1000; updateTime = new Date(updateTime); let updateAge = getRelativeTime(updateTime); // [tl! collapse:end] // parse data let updateTime = res.time; let conditions = (res.conditions).toLowerCase(); let tempDiff = Math.abs(res.temperature - res.feels_like); // [tl! ++ **] let temp = `${res.temperature}°f (${(((res.temperature - 32) * 5) / 9).toFixed(1)}°c)`; if (tempDiff &gt;= 5) { // [tl! ++:2 **:2] temp += `, feels ${res.feels_like}°f (${(((res.feels_like - 32) *5) / 9).toFixed(1)}°c)`; } let humidity = `${res.humidity}% humidity`; let wind = `${res.wind_gust}mph (${(res.wind_gust*1.609344).toFixed(1)}kph) from ${(res.wind_direction).toLowerCase()}`; let rainToday; // [tl! ++:5 **:5] if (res.rain_today === 0) { rainToday = &#39;no rain today&#39;; } else { rainToday = `${res.rain_today}&#34; rain today`; } let rainToday = `${res.rain_today}&#34; rain today`; // [tl! -- reindex(-1)] let pressureTrend = res.pressure_trend; let pressure = `${res.pressure}inhg and ${pressureTrend}`; // display data [tl! collapse:start] document.getElementById(&#39;time&#39;).innerHTML = updateAge; document.getElementById(&#39;time&#39;).innerHTML = updateTime; document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; }); // [tl! collapse:end] For testing this, I can manually alter4 the contents of the pastebin to ensure the values are what I need:\n// torchlight! {&#34;lineNumbers&#34;: true} { &#34;temperature&#34;: 57, // [tl! **] &#34;conditions&#34;: &#34;Rain Likely&#34;, &#34;feels_like&#34;: 67, // [tl! **] &#34;icon&#34;: &#34;rainy&#34;, &#34;rain_today&#34;: 0, // [tl! **] &#34;pressure_trend&#34;: &#34;steady&#34;, &#34;humidity&#34;: 92, &#34;pressure&#34;: 29.91, &#34;time&#34;: 1707676590, &#34;wind_direction&#34;: &#34;WSW&#34;, &#34;wind_gust&#34;: 2 } The final touch will be to change the icons depending on the values of certain fields:\nThe conditions icon will vary from to , roughly corresponding to the icon value from the Tempest API. The temperature icon will range from to depending on the temperature. The precipitation icon will be for no precipitation, and progress through and for a bit more rain, and cap out at for lots of rain5. The pressure icon should change based on the trend: , , and . I'll set up a couple of constants to define the ranges, and a few more for mapping the values to icons. 
Then I'll use the Array.prototype.find method to select the appropriate upper value from each range. And then I leverage document.getElementsByClassName to match the placeholder icon classes and dynamically replace them.\n// torchlight! {&#34;lineNumbers&#34;: true} const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // [tl! reindex(8) collapse:start] // get ready to calculate relative time const units = { year : 24 * 60 * 60 * 1000 * 365, month : 24 * 60 * 60 * 1000 * 365/12, day : 24 * 60 * 60 * 1000, hour : 60 * 60 * 1000, minute: 60 * 1000, second: 1000 } let rtf = new Intl.RelativeTimeFormat(&#39;en&#39;, { numeric: &#39;auto&#39; }) let getRelativeTime = (d1, d2 = new Date()) =&gt; { let elapsed = d1 - d2 for (var u in units) if (Math.abs(elapsed) &gt; units[u] || u === &#39;second&#39;) return rtf.format(Math.round(elapsed/units[u]), u) } // [tl! collapse:end] // set up temperature and rain ranges [tl! ++:start **:start] const tempRanges = [ { upper: 32, label: &#39;cold&#39; }, { upper: 60, label: &#39;cool&#39; }, { upper: 80, label: &#39;warm&#39; }, { upper: Infinity, label: &#39;hot&#39; } ]; const rainRanges = [ { upper: 0.02, label: &#39;none&#39; }, { upper: 0.2, label: &#39;light&#39; }, { upper: 1.4, label: &#39;moderate&#39; }, { upper: Infinity, label: &#39;heavy&#39; } ] // maps for selecting icons const CLASS_MAP_PRESS = { &#39;steady&#39;: &#39;fa-solid fa-arrow-right-long&#39;, &#39;rising&#39;: &#39;fa-solid fa-arrow-trend-up&#39;, &#39;falling&#39;: &#39;fa-solid fa-arrow-trend-down&#39; } const CLASS_MAP_RAIN = { &#39;none&#39;: &#39;fa-solid fa-droplet-slash&#39;, &#39;light&#39;: &#39;fa-solid fa-glass-water-droplet&#39;, &#39;moderate&#39;: &#39;fa-solid fa-glass-water&#39;, &#39;heavy&#39;: &#39;fa-solid fa-bucket&#39; } const CLASS_MAP_TEMP = { &#39;hot&#39;: &#39;fa-solid fa-thermometer-full&#39;, &#39;warm&#39;: &#39;fa-solid fa-thermometer-half&#39;, &#39;cool&#39;: &#39;fa-solid fa-thermometer-quarter&#39;, &#39;cold&#39;: &#39;fa-solid fa-thermometer-empty&#39; } const CLASS_MAP_WX = { &#39;clear-day&#39;: &#39;fa-solid fa-sun&#39;, &#39;clear-night&#39;: &#39;fa-solid fa-moon&#39;, &#39;cloudy&#39;: &#39;fa-solid fa-cloud&#39;, &#39;foggy&#39;: &#39;fa-solid fa-cloud-showers-smog&#39;, &#39;partly-cloudy-day&#39;: &#39;fa-solid fa-cloud-sun&#39;, &#39;partly-cloudy-night&#39;: &#39;fa-solid fa-cloud-moon&#39;, &#39;possibly-rainy-day&#39;: &#39;fa-solid fa-cloud-sun-rain&#39;, &#39;possibly-rainy-night&#39;: &#39;fa-solid fa-cloud-moon-rain&#39;, &#39;possibly-sleet-day&#39;: &#39;fa-solid fa-cloud-meatball&#39;, &#39;possibly-sleet-night&#39;: &#39;fa-solid fa-cloud-moon-rain&#39;, &#39;possibly-snow-day&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;possibly-snow-night&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;possibly-thunderstorm-day&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;possibly-thunderstorm-night&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;rainy&#39;: &#39;fa-solid fa-cloud-showers-heavy&#39;, &#39;sleet&#39;: &#39;fa-solid fa-cloud-rain&#39;, &#39;snow&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;thunderstorm&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;windy&#39;: &#39;fa-solid fa-wind&#39;, } // [tl! ++:end **:end] // fetch data from pastebin [tl! 
collapse:10] fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // calculate age of last update let updateTime = res.time; updateTime = parseInt(updateTime); updateTime = updateTime*1000; updateTime = new Date(updateTime); let updateAge = getRelativeTime(updateTime); // parse data let updateTime = res.time; let conditions = (res.conditions).toLowerCase(); let tempDiff = Math.abs(res.temperature - res.feels_like); let temp = `${res.temperature}°f (${(((res.temperature - 32) * 5) / 9).toFixed(1)}°c)`; if (tempDiff &gt;= 5) { temp += `, feels ${res.feels_like}°f (${(((res.feels_like - 32) *5) / 9).toFixed(1)}°c)`; } let tempLabel = (tempRanges.find(range =&gt; res.feels_like &lt; range.upper)).label; // [tl! ++ **] let humidity = `${res.humidity}% humidity`; let wind = `${res.wind_gust}mph (${(res.wind_gust*1.609344).toFixed(1)}kph) from ${(res.wind_direction).toLowerCase()}`; let rainLabel = (rainRanges.find(range =&gt; res.rain_today &lt; range.upper)).label; // [tl! ++ **] let rainToday; if (res.rain_today === 0) { rainToday = &#39;no rain today&#39;; } else { rainToday = `${res.rain_today}&#34; rain today`; } let rainToday = `${res.rain_today}&#34; rain today`; let pressureTrend = res.pressure_trend; let pressure = `${res.pressure}inhg and ${pressureTrend}`; let icon = res.icon; // [tl! ++ **] // display data [tl! collapse:8] document.getElementById(&#39;time&#39;).innerHTML = updateAge; document.getElementById(&#39;time&#39;).innerHTML = updateTime; document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; // update icons [tl! ++:4 **:4] document.getElementsByClassName(&#39;fa-cloud-sun-rain&#39;)[0].classList = CLASS_MAP_WX[icon]; document.getElementsByClassName(&#39;fa-temperature-half&#39;)[0].classList = CLASS_MAP_TEMP[tempLabel]; document.getElementsByClassName(&#39;fa-droplet-slash&#39;)[0].classList = CLASS_MAP_RAIN[rainLabel]; document.getElementsByClassName(&#39;fa-arrow-right-long&#39;)[0].classList = CLASS_MAP_PRESS[pressureTrend]; }); And I can see that the icons do indeed match the conditions:\nOMG, LOL, Let's Do It Now that all that prep work is out of the way, transposing the code into the live omg.lol homepage doesn't really take much effort.\nI'll just use the online editor to drop this at the bottom of my web page:\n// torchlight! {&#34;lineNumbers&#34;:true} # /proc/wx &lt;!-- [tl! reindex(47)] --&gt; - &lt;span id=&#34;conditions&#34;&gt;&lt;/span&gt; {cloud-sun-rain} - &lt;span id=&#34;temp&#34;&gt;&lt;/span&gt; {temperature-half} - &lt;span id=&#34;humidity&#34;&gt;&lt;/span&gt; {droplet} - &lt;span id=&#34;wind&#34;&gt;&lt;/span&gt; {wind} - &lt;span id=&#34;rainToday&#34;&gt;&lt;/span&gt; {droplet-slash} - &lt;span id=&#34;pressure&#34;&gt;&lt;/span&gt; {arrow-right-long} - &lt;i&gt;as of &lt;span id=&#34;time&#34;&gt;&lt;/span&gt;&lt;/i&gt; {clock} A bit further down the editor page, there's a box for inserting additional content into the page's &lt;head&gt; block. That's a great place to past in the entirety of my script:\n// torchlight! {&#34;lineNumbers&#34;:true} &lt;script&gt; // [tl! 
reindex(3)] const wx_endpoint = &#39;https://paste.jbowdre.lol/tempest.json/raw&#39;; // get ready to calculate relative time const units = { year : 24 * 60 * 60 * 1000 * 365, month : 24 * 60 * 60 * 1000 * 365/12, day : 24 * 60 * 60 * 1000, hour : 60 * 60 * 1000, minute: 60 * 1000, second: 1000 } let rtf = new Intl.RelativeTimeFormat(&#39;en&#39;, { numeric: &#39;auto&#39; }) let getRelativeTime = (d1, d2 = new Date()) =&gt; { let elapsed = d1 - d2 for (var u in units) if (Math.abs(elapsed) &gt; units[u] || u === &#39;second&#39;) return rtf.format(Math.round(elapsed/units[u]), u) } // set up temperature and rain ranges const tempRanges = [ { upper: 32, label: &#39;cold&#39; }, { upper: 60, label: &#39;cool&#39; }, { upper: 80, label: &#39;warm&#39; }, { upper: Infinity, label: &#39;hot&#39; } ]; const rainRanges = [ { upper: 0.02, label: &#39;none&#39; }, { upper: 0.2, label: &#39;light&#39; }, { upper: 1.4, label: &#39;moderate&#39; }, { upper: Infinity, label: &#39;heavy&#39; } ] // maps for selecting icons const CLASS_MAP_PRESS = { &#39;steady&#39;: &#39;fa-solid fa-arrow-right-long&#39;, &#39;rising&#39;: &#39;fa-solid fa-arrow-trend-up&#39;, &#39;falling&#39;: &#39;fa-solid fa-arrow-trend-down&#39; } const CLASS_MAP_RAIN = { &#39;none&#39;: &#39;fa-solid fa-droplet-slash&#39;, &#39;light&#39;: &#39;fa-solid fa-glass-water-droplet&#39;, &#39;moderate&#39;: &#39;fa-solid fa-glass-water&#39;, &#39;heavy&#39;: &#39;fa-solid fa-bucket&#39; } const CLASS_MAP_TEMP = { &#39;hot&#39;: &#39;fa-solid fa-thermometer-full&#39;, &#39;warm&#39;: &#39;fa-solid fa-thermometer-half&#39;, &#39;cool&#39;: &#39;fa-solid fa-thermometer-quarter&#39;, &#39;cold&#39;: &#39;fa-solid fa-thermometer-empty&#39; } const CLASS_MAP_WX = { &#39;clear-day&#39;: &#39;fa-solid fa-sun&#39;, &#39;clear-night&#39;: &#39;fa-solid fa-moon&#39;, &#39;cloudy&#39;: &#39;fa-solid fa-cloud&#39;, &#39;foggy&#39;: &#39;fa-solid fa-cloud-showers-smog&#39;, &#39;partly-cloudy-day&#39;: &#39;fa-solid fa-clouds-sun&#39;, &#39;partly-cloudy-night&#39;: &#39;fa-solid fa-clouds-moon&#39;, &#39;possibly-rainy-day&#39;: &#39;fa-solid fa-cloud-sun-rain&#39;, &#39;possibly-rainy-night&#39;: &#39;fa-solid fa-cloud-moon-rain&#39;, &#39;possibly-sleet-day&#39;: &#39;fa-solid fa-cloud-meatball&#39;, &#39;possibly-sleet-night&#39;: &#39;fa-solid fa-cloud-moon-rain&#39;, &#39;possibly-snow-day&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;possibly-snow-night&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;possibly-thunderstorm-day&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;possibly-thunderstorm-night&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;rainy&#39;: &#39;fa-solid fa-cloud-showers-heavy&#39;, &#39;sleet&#39;: &#39;fa-solid fa-cloud-rain&#39;, &#39;snow&#39;: &#39;fa-solid fa-snowflake&#39;, &#39;thunderstorm&#39;: &#39;fa-solid fa-cloud-bolt&#39;, &#39;windy&#39;: &#39;fa-solid fa-wind&#39;, } // fetch data from pastebin fetch(wx_endpoint) .then(res =&gt; res.json()) .then(function(res){ // calculate age of last update let updateTime = res.time; updateTime = parseInt(updateTime); updateTime = updateTime*1000; updateTime = new Date(updateTime); let updateAge = getRelativeTime(updateTime); // parse data let conditions = (res.conditions).toLowerCase(); let tempDiff = Math.abs(res.temperature - res.feels_like); let temp = `${res.temperature}°f (${(((res.temperature - 32) * 5) / 9).toFixed(1)}°c)`; if (tempDiff &gt;= 5) { temp += `, feels ${res.feels_like}°f (${(((res.feels_like - 32) *5) / 9).toFixed(1)}°c)`; } let tempLabel = (tempRanges.find(range 
=&gt; res.feels_like &lt; range.upper)).label; let humidity = `${res.humidity}% humidity`; let wind = `${res.wind_gust}mph (${(res.wind_gust*1.609344).toFixed(1)}kph) from ${(res.wind_direction).toLowerCase()}`; let rainLabel = (rainRanges.find(range =&gt; res.rain_today &lt; range.upper)).label; let rainToday; if (res.rain_today === 0) { rainToday = &#39;no rain today&#39;; } else { rainToday = `${res.rain_today}&#34; rain today`; } let pressureTrend = res.pressure_trend; let pressure = `${res.pressure}inhg and ${pressureTrend}`; let icon = res.icon; // display data document.getElementById(&#39;time&#39;).innerHTML = updateAge; document.getElementById(&#39;conditions&#39;).innerHTML = conditions; document.getElementById(&#39;temp&#39;).innerHTML = temp; document.getElementById(&#39;humidity&#39;).innerHTML = humidity; document.getElementById(&#39;wind&#39;).innerHTML = wind; document.getElementById(&#39;rainToday&#39;).innerHTML = rainToday; document.getElementById(&#39;pressure&#39;).innerHTML = pressure; // update icons document.getElementsByClassName(&#39;fa-cloud-sun-rain&#39;)[0].classList = CLASS_MAP_WX[icon]; document.getElementsByClassName(&#39;fa-temperature-half&#39;)[0].classList = CLASS_MAP_TEMP[tempLabel]; document.getElementsByClassName(&#39;fa-droplet-slash&#39;)[0].classList = CLASS_MAP_RAIN[rainLabel]; document.getElementsByClassName(&#39;fa-arrow-right-long&#39;)[0].classList = CLASS_MAP_PRESS[pressureTrend]; }); &lt;/script&gt; With that, I hit the happy little Save &amp; Publish button in the editor, and then revel in seeing my new weather display at the bottom of my omg.lol page↗:
I'm quite pleased with how this turned out, and I had fun learning a few JavaScript tricks in the process. I stashed the code for this little project in my lolz repo↗ in case you'd like to take a look. I have some ideas for other APIs I might want to integrate in a similar way in the future so stay tuned.
These are degrees Fahrenheit, after all. If I needed precision I'd be using better units.&#160;&#x21a9;&#xfe0e;
I'm such a sucker for basically anything that I can tie one of my domains to.&#160;&#x21a9;&#xfe0e;
Per the docs↗, runs can run at most once every five minutes, and they may be delayed (or even canceled) if a runner isn't available due to heavy load.&#160;&#x21a9;&#xfe0e;
My dishonest values will just get corrected the next time the workflow is triggered.&#160;&#x21a9;&#xfe0e;
Okay, admittedly this progression isn't perfect, but it's the best I could come up with within the free FA icon set.&#160;&#x21a9;&#xfe0e;

Deploying a Hugo Site to Neocities with GitHub Actions

I came across Neocities↗ many months ago, and got really excited by the premise: a free web host with the mission to bring back the &quot;fun, creativity and independence that made the web great.&quot; I spent a while scrolling through the gallery↗ of personal sites and was amazed by both the nostalgic vibes and the creativity on display. It's like a portal back to when the web was fun. Neocities seemed like something I wanted to be a part of so I signed up for an account...
and soon realized that I didn't really want to go back to crafting artisanal HTML by hand like I did in the early '00s. I didn't see an easy way to leverage my preferred static site generator1 so I filed it away and moved on.
Until yesterday, when I saw a post from Sophie↗ on How I deploy my Eleventy site to Neocities↗. I hadn't realized that Neocities had an API↗, or that there was a deploy-to-neocities↗ GitHub Action which uses that API to push content to Neocities. With that new-to-me information, I thought I'd give Neocities another try - a real one this time.
I had been hosting this site on Netlify's free plan for a couple of years and hadn't really encountered any problems. But I saw Neocities as a better vision of the internet, and I wanted to be a part of that2. So last night I upgraded to the $5/month Neocities Supporter↗ plan, which would let me use a custom domain for my site (along with higher storage and bandwidth limits).
I knew I'd need to make some changes to Sophie's workflow since my site is built with Hugo rather than Eleventy. I did some poking around and found GitHub Actions for Hugo↗, which would take care of installing Hugo for me. Then I'd just need to render the HTML with hugo --minify and use the Torchlight CLI to mark up the code blocks. Along the way, I also discovered that I'd need to overwrite /not_found.html to insert my custom 404 page, so I included an extra step to do that. After that, I'd finally be ready to push the results to Neocities.
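Those steps are easy enough to sanity-check locally before wiring them into a workflow. Roughly (a sketch, assuming Hugo, Node, and a Torchlight token are already set up on the machine):

```shell
# build the site and minify the output into ./public
hugo --minify
# copy the custom 404 page to the filename Neocities expects
cp public/404/index.html public/not_found.html
# pull down the Torchlight CLI and highlight code blocks in the rendered HTML
npm i @torchlight-api/torchlight-cli
npx torchlight
```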
It took a bit of trial and error, but I eventually adapted this workflow which does the trick:
The Workflow

```yaml
# torchlight! {"lineNumbers": true}
# .github/workflows/deploy-to-neocities.yml
name: Deploy to Neocities

on:
  # Daily build to catch any future-dated posts
  schedule:
    - cron: 0 13 * * *
  # Build on pushes to the main branch only
  push:
    branches:
      - main

concurrency:
  group: deploy-to-neocities
  cancel-in-progress: true

defaults:
  run:
    shell: bash

jobs:
  deploy:
    name: Build and deploy Hugo site
    runs-on: ubuntu-latest
    steps:
      # Install Hugo in the runner
      - name: Hugo setup
        uses: peaceiris/[email protected]
        with:
          hugo-version: '0.121.1'
          extended: true
      # Check out the source for the site
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      # Build the site with Hugo
      - name: Build with Hugo
        run: hugo --minify
      # Copy my custom 404 page to not_found.html so it
      # will be picked up by Neocities
      - name: Insert 404 page
        run: |
          cp public/404/index.html public/not_found.html
      # Highlight code blocks with the Torchlight CLI
      - name: Highlight with Torchlight
        run: |
          npm i @torchlight-api/torchlight-cli
          npx torchlight
      # Push the rendered site to Neocities and
      # clean up any orphaned files
      - name: Deploy to Neocities
        uses: bcomnes/deploy-to-neocities@v1
        with:
          api_token: ${{ secrets.NEOCITIES_API_TOKEN }}
          cleanup: true
          dist_dir: public
```

I'm thrilled with how well this works, and happy to have learned a bit more about GitHub Actions in the process. Big thanks to Sophie for pointing me in the right direction!
Also I'm kind of lazy, and not actually very good at web design anyway. I mean, you've seen my work.&#160;&#x21a9;&#xfe0e;
Plus I love supporting passion projects.&#160;&#x21a9;&#xfe0e;
`,description:"Using GitHub Actions to automatically deploy a Hugo website to Neocities.",url:"https://runtimeterror.dev/deploy-hugo-neocities-github-actions/"},"https://runtimeterror.dev/enable-fips-fix-aria-lifecycle/":{title:"Enabling FIPS Compliance Fixes Aria Lifecycle 8.14",tags:["vmware"],content:`This week, VMware posted VMSA-2024-0001↗ which details a critical (9.9/10) vulnerability in vRealize Aria Automation. While working to get our environment patched, I ran into an interesting error on our Aria Lifecycle appliance:
```
Error Code: LCMVRAVACONFIG590024
VMware Aria Automation hostname is not valid or unable to run the product specific commands via SSH on the host. Check if VMware Aria Automation is up and running.
VMware Aria Automation hostname is not valid or unable to run the product specific commands via SSH on the host. Check if VMware Aria Automation is up and running.
com.vmware.vrealize.lcm.drivers.vra80.exception.VraVaProductNotFoundException: Either provided hostname: <VMwareAriaAutomationFQDN> is not a valid VMware Aria Automation hostname or unable to run the product specific commands via SSH on the host.
  at com.vmware.vrealize.lcm.drivers.vra80.helpers.VraPreludeInstallHelper.getVraFullVersion(VraPreludeInstallHelper.java:970)
  at com.vmware.vrealize.lcm.drivers.vra80.helpers.VraPreludeInstallHelper.checkVraApplianceAndVersion(VraPreludeInstallHelper.java:978)
  at com.vmware.vrealize.lcm.drivers.vra80.helpers.VraPreludeInstallHelper.getVraProductDetails(VraPreludeInstallHelper.java:754)
  at com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaImportEnvironmentTask.execute(VraVaImportEnvironmentTask.java:145)
  at com.vmware.vrealize.lcm.platform.automata.service.Task.retry(Task.java:158)
  at com.vmware.vrealize.lcm.automata.core.TaskThread.run(TaskThread.java:60)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.base/java.lang.Thread.run(Unknown Source)
```

Digging further into the appliance logs revealed some more details:
```
Session.connect: java.security.spec.InvalidKeySpecException: key spec not recognized
```

That seems like a much more insightful error than "the hostname is not valid, dummy."
Anyhoo, searching for the error took me to a VMware KB on the subject:
VMware Aria Suite Lifecycle 8.14 Patch 1 Day 2 operations fail for VMware Aria Automation with error code LCMVRAVACONFIG590024 (96243)↗ After applying VMware Aria Suite Lifecycle 8.14 Patch 1, you may encounter deployment and day-2 operation failures, attributed to the elimination of weak algorithms in Suite Lifecycle. To prevent such issues, it is recommended to either turn on FIPS in VMware Aria Suite Lifecycle or implement the specified workarounds on other VMware Aria Products, as outlined in the article Steps for Removing SHA1 weak Algorithms/Ciphers from all VMware Aria Products.
That's right. According to the KB, the solution to the untrusted encryption algorithms is to enable FIPS compliance. I was skeptical: I've never seen FIPS enforcement fix problems; it always causes them.
But I gave it a shot, and holy crap it actually worked! Enabling FIPS compliance on the Aria Lifecycle appliance got things going again.
I feel like I've seen everything now.
`,description:"Never in my life have I seen enabling FIPS *fix* a problem - until now.",url:"https://runtimeterror.dev/enable-fips-fix-aria-lifecycle/"},"https://runtimeterror.dev/publish-services-cloudflare-tunnel/":{title:"Publish Services with Cloudflare Tunnel",tags:["cloudflare","cloud","containers","docker","networking","selfhosting"],content:`I've written a bit lately about how handy Tailscale Serve and Funnel can be, and I continue to get a lot of great use out of those features. But not every networking nail is best handled with a Tailscale-shaped hammer. Funnel has two limitations that might make it less than ideal for certain situations.
First, sites served with Funnel can only have a hostname in the form of server.tailnet-name.ts.net. You can't use a custom domain for this, but you might not always want to advertise that a service is shared via Tailscale. Second, Funnel connections have an undisclosed bandwidth limit, which could cause problems if you're hoping to serve media through the Funnel.
For instance, I've started using Immich↗ as a self-hosted alternative to Google Photos. Using Tailscale Serve to make my Immich server available on my Tailnet works beautifully, and I initially set up a Funnel connection to use for when I wanted to share certain photos, videos, and albums externally. I quickly realized that it took f o r e v e r to load the page when those links were shared publicly. I probably won't share a lot of those public links but I'd like them to be a bit more responsive when I do.
I went looking for another solution, and I found one in a suite of products I already use.
Overview
I've been using Cloudflare's generous free plan↗ for DNS, content caching, page/domain redirects, email forwarding, and DDoS mitigation1 across my dozen or so domains. In addition to these "basic" services and features, Cloudflare also offers a selection of Zero Trust Network Access↗ products, and one of those is Cloudflare Tunnel↗ - also available with a generous free plan.
In some ways, Cloudflare Tunnel is quite similar to Tailscale Funnel. Both provide a secure way to publish a resource on the internet without requiring a public IP address, port forwarding, or firewall configuration. Both use a lightweight agent on your server to establish an encrypted outbound tunnel, and inbound traffic gets routed into that tunnel through the provider's network. And both solutions automatically provision trusted SSL certificates to keep traffic safe and browsers happy.
Tailscale Funnel is very easy to set up, and it doesn't require any additional infrastructure, not even a domain name. There aren't a lot of controls available with Funnel - it's basically on or off, and bound to one of three port numbers. You don't get to pick the domain name where it's served, just the hostname of the Tailscale node - and if you want to share multiple resources on the same host you'll need to get creative. I think this approach is really ideal for quick development and testing scenarios.
For longer-term, more production-like use, Cloudflare Tunnel is a pretty great fit. It ties in well with existing Cloudflare services, doesn't enforce a reduced bandwidth limit, and provides a bit more flexibility for how your resource will be presented to the web. It can also integrate tightly with the rest of Cloudflare's Zero Trust offerings to easily provide access controls to further protect your resource. It does, however, require a custom domain managed with Cloudflare DNS in order to work2.
For my specific Immich use case, I decided to share my instance via Tailscale Serve for internal access and Cloudflare Tunnel for public shares, and I used a similar sidecar approach to make it work without too much fuss. For the purposes of this blog post, though, I'm going to run through a less complicated example3.
Speedtest Demo I'm going to deploy a quick SpeedTest by OpenSpeedTest↗ container, and proxy it with both Tailscale Funnel and Cloudflare Tunnel so that I can compare the bandwidth of the two tunnel solutions directly.
I'll start with a very basic Docker Compose definition for just the Speedtest container:
```yaml
# torchlight! {"lineNumbers":true}
# docker-compose.yml
services:
  speedtest:
    image: openspeedtest/latest
    container_name: speedtest
    restart: unless-stopped
```

Tailscale Funnel
And, as in my last post I'll add in my Tailscale sidecar to enable funnel:
# torchlight! {&#34;lineNumbers&#34;:true} # docker-compose.yml services: speedtest: image: openspeedtest/latest container_name: speedtest restart: unless-stopped network_mode: service:tailscale # [tl! ++:start focus:start] tailscale: image: ghcr.io/jbowdre/tailscale-docker:latest container_name: speedtest-tailscaled restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-tailscale-sidecar} TS_STATE_DIR: &#34;/var/lib/tailscale/&#34; TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_SERVE_PORT: \${TS_SERVE_PORT:-} TS_FUNNEL: \${TS_FUNNEL:-} # [tl! ++:end focus:end] Network Mode
I set network_mode: service:tailscale on the speedtest container so that it will share its network interface with the tailscale container. This allows Tailscale Serve/Funnel to proxy speedtest at http://localhost:3000, which is nice since Tailscale doesn't currently/officially support proxying remote hosts.
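One quick (hypothetical) way to see that namespace sharing in action: the sidecar can reach the app on localhost, assuming the Alpine-based Tailscale image's BusyBox wget is available:

```shell
# from inside the Tailscale sidecar, the speedtest app answers on localhost:3000
docker exec speedtest-tailscaled wget -qO- http://localhost:3000 | head -c 200
```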
I'll set up a new auth key in the Tailscale Admin Portal↗, and insert that (along with hostname, port, and funnel configs) into my .env file:
```shell
# torchlight! {"lineNumbers":true}
# .env
TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=speedtest
TS_EXTRA_ARGS=--ssh
TS_SERVE_PORT=3000 # the port the speedtest runs on by default
TS_FUNNEL=true
```

A quick docker compose up -d and my new speedtest is alive!
First I'll hit it at http://speedtest.tailnet-name.ts.net:3000 to access it purely inside of my Tailnet: Not bad! Now let's see what happens when I disable Tailscale on my laptop and hit the public Funnel endpoint at https://speedtest.tailnet-name.ts.net: Oof. Routing traffic through the Funnel dropped the download by ~25% and the upload by ~90%, not to mention the significant ping increase.
Cloudflare Tunnel Alright, let's throw a Cloudflare Tunnel on there and see what happens.
To start that process, I'll log into my Cloudflare dashboard↗ and then use the side navigation to make my way to the Zero Trust (AKA &quot;Cloudflare One&quot;) area. From there, I'll drill down through Access -&gt; Tunnels and click on + Create a tunnel. I'll give it an appropriate name like speedtest and then click Save tunnel.
Now Cloudflare helpfully provides installation instructions for a variety of different platforms. I'm doing that Docker thing so I'll click the appropriate button and review that command snippet: I can easily adapt that and add it to my Docker Compose setup4:
# torchlight! {&#34;lineNumbers&#34;:true} # docker-compose.yml services: speedtest: image: openspeedtest/latest container_name: speedtest restart: unless-stopped network_mode: service:tailscale tailscale: # [tl! collapse:start] image: ghcr.io/jbowdre/tailscale-docker:latest container_name: speedtest-tailscaled restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-tailscale} TS_STATE_DIR: &#34;/var/lib/tailscale/&#34; TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_SERVE_PORT: \${TS_SERVE_PORT:-} TS_FUNNEL: \${TS_FUNNEL:-} # [tl! collapse:end] cloudflared: # [tl! ++:start focus:start] image: cloudflare/cloudflared container_name: speedtest-cloudflared restart: unless-stopped command: - tunnel - --no-autoupdate - run - --token - \${CLOUDFLARED_TOKEN} network_mode: service:tailscale # [tl! ++:end focus:end] After dropping the value for CLOUDFLARED_TOKEN into my .env file, I can do another docker compose up -d to bring this online - and that status will be reflected back on the config page as well: I'll click Next and proceed with the rest of the configuration, which consists of picking a public hostname for the frontend and defining the private service for the backend: I can click Save tunnel and... that's it. My tunnel is live, and I can now reach my speedtest at https://speedtest.runtimeterror.dev. Let's see how it does: So that's much faster than Tailscale Funnel, and even faster than a direct transfer within the Tailnet. Cloudflare Tunnel should work quite nicely for sharing photos publicly from my Immich instance.
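If the dashboard doesn't show the connector as healthy right away, the container logs are a reasonable first place to look (the exact log lines will vary):

```shell
# confirm the cloudflared connector registered with Cloudflare's edge
docker compose logs cloudflared | tail -n 20
```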
Bonus: Access Control But what if I don't want just anyone to be able to use my new speedtest (or access my Immich instance)? Defining an application in Cloudflare One will let me set some limits.
So I'll go to Access -&gt; Applications and select that I'm adding a Self-hosted application. I can then do the basic configuration, basically just telling Cloudflare that I'd like to protect the https://speedtest.runtimeterror.dev app: I can leave the rest of that page with the default selections so I'll scroll down and click Next.
Now I need to create a policy to apply to this application. I'm going to be simple and just say that anyone with an @runtimeterror.dev email address should be able to use my speedtest: Without any external identity providers connected, Cloudflare will default to requiring authentication via a one-time PIN sent to an input email address. That's pretty easy, and it pairs well with allowing access based on email address attributes. There are a bunch of other options I could configure if I wanted... but my needs are simple so I'll just click through and save this new application config.
Now, if I try to visit my speedtest with a new session I'll get automatically routed to the Cloudflare Access challenge which will prompt for my email address. If my email is on the approved list (that is, if it ends with @runtimeterror.dev), I'll get emailed a code which I can then use to log in and access the speedtest. If not, I won't get in. And since this thing is served through a Cloudflare Tunnel (rather than a public IP address merely advertised via DNS) there isn't any way to bypass Cloudflare's authentication challenge.
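A quick spot-check that the Access policy is really in front of the app (a hypothetical test - the exact status code and headers will depend on the Access configuration) is to hit it without a session and look for the redirect toward Cloudflare's login:

```shell
# an unauthenticated request should get bounced toward the Cloudflare Access login
curl -sI https://speedtest.runtimeterror.dev | head -n 5
```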
Conclusion This has been a quick demo of how easy it is to configure a Cloudflare Tunnel to securely publish resources on the web. I really like being able to share a service publicly without having to manage DNS, port-forwarding, or firewall configurations, and the ability to offload authentication and authorization to Cloudflare is a big plus. I still don't think Tailscale can be beat for sharing stuff internally, but I think Cloudflare Tunnels make more sense for long-term public sharing. And it's awesome that I can run both solutions side-by-side to really get the best of both when I need it.
And a ton of other things I'm forgetting right now.&#160;&#x21a9;&#xfe0e;
Cloudflare Tunnel lets you choose what hostname and domain name should be used for fronting your tunnel, and it even takes care of configuring the required DNS record automagically.&#160;&#x21a9;&#xfe0e;
My Immich stack is using ~10 containers and I don't really feel like documenting that all here - not yet, at least.&#160;&#x21a9;&#xfe0e;
Setting the network_mode isn't strictly necessary for the cloudflared container since Cloudflare Tunnel does support proxying remote hosts, but I'll just stick with it here for consistency.&#160;&#x21a9;&#xfe0e;
`,description:"Exploring Cloudflare Tunnel as an alternative to Tailscale Funnel for secure public access to internal resources.",url:"https://runtimeterror.dev/publish-services-cloudflare-tunnel/"},"https://runtimeterror.dev/tailscale-serve-docker-compose-sidecar/":{title:"Tailscale Serve in a Docker Compose Sidecar",tags:["containers","docker","selfhosting","tailscale"],content:`Hi, and welcome back to what has become my Tailscale blog.
I have a few servers that I use for running multiple container workloads. My approach in the past had been to use Caddy webserver↗ on the host to proxy the various containers. With this setup, each app would have its own DNS record, and Caddy would be configured to route traffic to the appropriate internal port based on that. For instance:
```
# torchlight! {"lineNumbers": true}
cyberchef.runtimeterror.dev {
  reverse_proxy localhost:8000
}

ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
  reverse_proxy localhost:8080
  @httpget {
    protocol http
    method GET
    path_regexp ^/([-_a-z0-9]{0,64}$|docs/|static/)
  }
  redir @httpget https://{host}{uri}
}

uptime.runtimeterror.dev {
  reverse_proxy localhost:3001
}

miniflux.runtimeterror.dev {
  reverse_proxy localhost:8080
}
```

and so on... You get the idea. This approach works well for services I want/need to be public, but it does require me to manage those DNS records and keep track of which app is on which port. That can be kind of tedious.
And I don't really need all of these services to be public. Not because they're particularly sensitive, but I just don't really have a reason to share my personal Miniflux↗ or CyberChef↗ instances with the world at large. Those would be great candidates to proxy with Tailscale Serve so they'd only be available on my tailnet. Of course, with that setup I'd then have to differentiate the services based on external port numbers since they'd all be served with the same hostname. That's not ideal either.
sudo tailscale serve --bg --https 8443 8180 # [tl! .cmd] Available within your tailnet: # [tl! .nocopy:6] https://tsdemo.tailnet-name.ts.net/ |-- proxy http://127.0.0.1:8000 https://tsdemo.tailnet-name.ts.net:8443/ |-- proxy http://127.0.0.1:8080 It would be really great if I could directly attach each container to my tailnet and then access the apps with addresses like https://miniflux.tailnet-name.ts.net or https://cyber.tailnet-name.ts.net. Tailscale does have an official Docker image↗, and at first glance it seems like that would solve my needs pretty directly. Unfortunately, it looks like trying to leverage that container image directly would still require me to configure Tailscale Serve interactively.1.
Update: 2024-02-07
Tailscale just published a blog post↗ which shares some details about how to configure Funnel and Serve within the official image. The short version is that the TS_SERVE_CONFIG variable should point to a serve-config.json file. The name of the file doesn't actually matter, but the contents do - and you can generate a config by running tailscale serve status -json on a functioning system... or just copy-pasta'ing this example I just made for the Cyberchef setup I describe later in this post:
```json
// torchlight! {"lineNumbers": true}
{
  "TCP": {
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "cyber.tailnet-name.ts.net:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://127.0.0.1:8000"
        }
      }
    }
  }//, uncomment to enable funnel
  // "AllowFunnel": {
  //   "cyber.tailnet-name.ts.net:443": true
  // }
}
```

Replace the ports and protocols and hostnames and such, and you'll be good to go.
A compose config using this setup might look something like this:
# torchlight! {&#34;lineNumbers&#34;: true} services: tailscale: image: tailscale/tailscale:latest # [tl! highlight] container_name: cyberchef-tailscale restart: unless-stopped environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_STATE_DIR: /var/lib/tailscale/ TS_SERVE_CONFIG: /config/serve-config.json # [tl! highlight] volumes: - ./ts_data:/var/lib/tailscale/ - ./serve-config.json:/config/serve-config.json # [tl! highlight] cyberchef: container_name: cyberchef image: mpepping/cyberchef:latest restart: unless-stopped network_mode: service:tailscale That's a bit cleaner than the workaround I'd put together, but you're totally welcome to keep on reading if you want to see how it compares.
And then I came across Louis-Philippe Asselin's post↗ about how he set up Tailscale in Docker Compose. When he wrote his post, there was even less documentation on how to do this stuff, so he used a modified Tailscale docker image↗ which loads a startup script↗ to handle some of the configuration steps. His repo also includes a helpful docker-compose example↗ of how to connect it together.
I quickly realized I could modify his startup script to take care of my Tailscale Serve need. So here's how I did it.
Docker Image
My image starts out basically the same as Louis-Philippe's, with just pulling in the official image and then adding the customized script:
```dockerfile
# torchlight! {"lineNumbers": true}
FROM tailscale/tailscale:v1.56.1
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/usr/bin/start.sh"]
```

My start.sh script has a few tweaks for brevity/clarity, and also adds a block for conditionally enabling a basic Tailscale Serve (or Funnel) configuration:
```shell
# torchlight! {"lineNumbers": true}
#!/bin/ash
trap 'kill -TERM $PID' TERM INT

echo "Starting Tailscale daemon"
tailscaled --tun=userspace-networking --statedir="${TS_STATE_DIR}" ${TS_TAILSCALED_EXTRA_ARGS} &
PID=$!
until tailscale up --authkey="${TS_AUTHKEY}" --hostname="${TS_HOSTNAME}" ${TS_EXTRA_ARGS}; do
  sleep 0.1
done
tailscale status
if [ -n "${TS_SERVE_PORT}" ]; then # [tl! ++:10]
  if [ -n "${TS_FUNNEL}" ]; then
    if ! tailscale funnel status | grep -A1 '(Funnel on)' | grep -q "${TS_SERVE_PORT}"; then
      tailscale funnel --bg "${TS_SERVE_PORT}"
    fi
  else
    if ! tailscale serve status | grep -q "${TS_SERVE_PORT}"; then
      tailscale serve --bg "${TS_SERVE_PORT}"
    fi
  fi
fi
wait ${PID}
```

This script starts the tailscaled daemon in userspace mode, and it tells the daemon to store its state in a user-defined location. It then uses a supplied pre-auth key↗ to bring up the new Tailscale node and set the hostname.
If both TS_SERVE_PORT and TS_FUNNEL are set, the script will publicly proxy the designated port with Tailscale Funnel. If only TS_SERVE_PORT is set, it will just proxy it internal to the tailnet with Tailscale Serve.2
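In other words, with TS_SERVE_PORT=8080 the conditional boils down to one of these two commands (shown just for illustration, mirroring the script above):

```shell
# TS_SERVE_PORT=8080 and TS_FUNNEL set: share the port publicly
tailscale funnel --bg 8080
# only TS_SERVE_PORT=8080 set: share the port inside the tailnet
tailscale serve --bg 8080
```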
I'm using this git repo↗ to track my work on this, and it automatically builds my tailscale-docker↗ image. So now I can can simply reference ghcr.io/jbowdre/tailscale-docker in my Docker configurations.
On that note...
Compose Configuration There's also a sample docker-compose.yml↗ in the repo to show how to use the image:
```yaml
# torchlight! {"lineNumbers": true}
services:
  tailscale:
    image: ghcr.io/jbowdre/tailscale-docker:latest
    restart: unless-stopped
    container_name: tailscale
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY:?err}                           # from https://login.tailscale.com/admin/settings/authkeys
      TS_HOSTNAME: ${TS_HOSTNAME:-ts-docker}                   # optional hostname to use for this node
      TS_STATE_DIR: "/var/lib/tailscale/"                      # store ts state in a local volume
      TS_TAILSCALED_EXTRA_ARGS: ${TS_TAILSCALED_EXTRA_ARGS:-}  # optional extra args to pass to tailscaled
      TS_EXTRA_ARGS: ${TS_EXTRA_ARGS:-}                        # optional extra flags to pass to tailscale up
      TS_SERVE_PORT: ${TS_SERVE_PORT:-}                        # optional port to proxy with tailscale serve (ex: '80')
      TS_FUNNEL: ${TS_FUNNEL:-}                                # if set, serve publicly with tailscale funnel
    volumes:
      - ./ts_data:/var/lib/tailscale/                          # the mount point should match TS_STATE_DIR
  myservice:
    image: nginxdemos/hello
    restart: unless-stopped
    network_mode: "service:tailscale"                          # use the tailscale service's network
```

You'll note that most of those environment variables aren't actually defined in this YAML. Instead, they'll be inherited from the environment used for spawning the containers. This provides a few benefits. First, it lets the tailscale service definition block function as a template that can be copied into other Compose files without modification. Second, it avoids holding sensitive data in the YAML itself. And third, it allows us to set default values for undefined variables (if TS_HOSTNAME is empty it will be automatically replaced with ts-docker) or throw an error if a required value isn't set (an empty TS_AUTHKEY will throw an error and abort).
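Compose borrows this shell-style parameter expansion, so the default/error behavior is easy to poke at outside of Docker (a quick illustrative sketch):

```shell
# ':-' substitutes a default when the variable is unset or empty
unset TS_HOSTNAME
echo "${TS_HOSTNAME:-ts-docker}"   # prints: ts-docker

# ':?' aborts with an error message when the variable is unset or empty
unset TS_AUTHKEY
echo "${TS_AUTHKEY:?err}"          # prints an error and exits non-zero
```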
You can create the required variables by exporting them at the command line (export TS_HOSTNAME=ts-docker) - but that runs the risk of having sensitive values like an authkey stored in your shell history. It's not a great habit.
Perhaps a better approach is to set the variables in a .env file stored alongside the docker-compose.yaml but with stricter permissions. This file can be owned and only readable by root (or the defined Docker user), while the Compose file can be owned by your own user or the docker group.
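Something like this would lock down the .env while leaving the Compose file itself readable (a minimal sketch - adjust the owner/group to whatever actually runs Docker in your environment):

```shell
# restrict the .env file holding the auth key to root only
sudo chown root:root .env
sudo chmod 600 .env
# the Compose file can stay readable by the docker group
sudo chgrp docker docker-compose.yml
sudo chmod 640 docker-compose.yml
```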
Here's how the .env for this setup might look:
```shell
# torchlight! {"lineNumbers": true}
TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=tsdemo
TS_TAILSCALED_EXTRA_ARGS=--verbose=1
TS_EXTRA_ARGS=--ssh
TS_SERVE_PORT=8080
TS_FUNNEL=1
```

| Variable Name | Example | Description |
| --- | --- | --- |
| TS_AUTHKEY | tskey-auth-somestring-somelongerstring | used for unattended auth of the new node, get one here↗ |
| TS_HOSTNAME | tsdemo | optional Tailscale hostname for the new node |
| TS_STATE_DIR | /var/lib/tailscale/ | required directory for storing Tailscale state, this should be mounted to the container for persistence |
| TS_TAILSCALED_EXTRA_ARGS | --verbose=1 | optional additional flags↗ for tailscaled |
| TS_EXTRA_ARGS | --ssh | optional additional flags↗ for tailscale up |
| TS_SERVE_PORT | 8080 | optional application port to expose with Tailscale Serve↗ |
| TS_FUNNEL | 1 | if set (to anything), will proxy TS_SERVE_PORT publicly with Tailscale Funnel↗ |

A few implementation notes:
If you want to use Funnel with this configuration, it might be a good idea to associate the Funnel ACL policy↗ with a tag (like tag:funnel), as I discussed a bit here. And then when you create the pre-auth key↗, you can set it to automatically apply the tag so it can enable Funnel. It's very important that the path designated by TS_STATE_DIR is a volume mounted into the container. Otherwise, the container will lose its Tailscale configuration when it stops. That could be inconvenient. Linking network_mode on the application container back to the service:tailscale definition is the magic↗ that lets the sidecar proxy traffic for the app. This way the two containers effectively share the same network interface, allowing them to share the same ports. So port 8080 on the app container is available on the tailscale container, and that enables tailscale serve --bg 8080 to work. Usage Examples To tie this all together, I'm going to quickly run through the steps I took to create and publish two container-based services without having to do any interactive configuration.
CyberChef I'll start with my CyberChef↗ instance.
CyberChef is a simple, intuitive web app for carrying out all manner of &quot;cyber&quot; operations within a web browser. These operations include simple encoding like XOR and Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.
This will be served publicly with Funnel so that my friends can use this instance if they need it.
I'll need a pre-auth key so that the Tailscale container can authenticate to my Tailnet. I can get that by going to the Tailscale Admin Portal↗ and generating a new auth key. I gave it a description, ticked the option to pre-approve whatever device authenticates with this key (since I have Device Approval↗ enabled on my tailnet). I also used the option to auto-apply the tag:internal tag I used for grouping my on-prem systems as well as the tag:funnel tag I use for approving Funnel devices in the ACL.
That gives me a new single-use authkey:
I'll use that new key as well as the knowledge that CyberChef is served by default on port 8000 to create an appropriate .env file:
```shell
# torchlight! {"lineNumbers": true}
TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=cyber
TS_EXTRA_ARGS=--ssh
TS_SERVE_PORT=8000
TS_FUNNEL=true
```

And I can add the corresponding docker-compose.yml to go with it:
# torchlight! {&#34;lineNumbers&#34;: true} services: tailscale: # [tl! focus:start] image: ghcr.io/jbowdre/tailscale-docker:latest restart: unless-stopped container_name: cyberchef-tailscale environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_STATE_DIR: &#34;/var/lib/tailscale/&#34; TS_TAILSCALED_EXTRA_ARGS: \${TS_TAILSCALED_EXTRA_ARGS:-} TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_SERVE_PORT: \${TS_SERVE_PORT:-} TS_FUNNEL: \${TS_FUNNEL:-} volumes: - ./ts_data:/var/lib/tailscale/ # [tl! focus:end] cyberchef: container_name: cyberchef image: mpepping/cyberchef:latest restart: unless-stopped network_mode: service:tailscale # [tl! focus] I can just bring it online like so:
docker compose up -d # [tl! .cmd .nocopy:1,4] [+] Running 3/3 ✔ Network cyberchef_default Created ✔ Container cyberchef-tailscale Started ✔ Container cyberchef Started I can review the logs for the tailscale service to confirm that the Funnel configuration was applied:
docker compose logs tailscale # [tl! .cmd .nocopy:1,12 focus] cyberchef-tailscale | # Health check: cyberchef-tailscale | # - not connected to home DERP region 12 cyberchef-tailscale | # - Some peers are advertising routes but --accept-routes is false cyberchef-tailscale | 2023/12/30 17:44:48 serve: creating a new proxy handler for http://127.0.0.1:8000 cyberchef-tailscale | 2023/12/30 17:44:48 Hostinfo.WireIngress changed to true cyberchef-tailscale | Available on the internet: # [tl! focus:6] cyberchef-tailscale | cyberchef-tailscale | https://cyber.tailnet-name.ts.net/ cyberchef-tailscale | |-- proxy http://127.0.0.1:8000 cyberchef-tailscale | cyberchef-tailscale | Funnel started and running in the background. cyberchef-tailscale | To disable the proxy, run: tailscale funnel --https=443 off And after ~10 minutes or so (it sometimes takes a bit longer for the DNS and SSL to start working outside the tailnet), I'll be able to hit the instance at https://cyber.tailnet-name.ts.net from anywhere on the web.
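Rather than mashing refresh in the browser, I could also just poll the public endpoint until the DNS record and certificate are ready (a hypothetical check using the hostname from this example):

```shell
# loop until the Funnel endpoint responds successfully over HTTPS
until curl -fsSI https://cyber.tailnet-name.ts.net >/dev/null; do
  echo "not ready yet, waiting..."
  sleep 60
done
echo "Funnel endpoint is live!"
```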
Miniflux I've lately been playing quite a bit with my omg.lol address↗ and associated services↗, and that's inspired me to revisit the world↗ of curating RSS feeds instead of relying on algorithms to keep me informed. Through that experience, I recently found Miniflux↗, a &quot;Minimalist and opinionated feed reader&quot;. It's written in Go, is fast and lightweight, and works really well as a PWA installed on mobile devices, too.
It will be great for keeping track of my feeds, but I don't need to expose this service publicly. So I'll serve it up inside my tailnet with Tailscale Serve.
Here's the .env that I'll use:
```shell
# torchlight! {"lineNumbers": true}
DB_USER=db-username
DB_PASS=db-passw0rd
ADMIN_USER=sysadmin
ADMIN_PASS=hunter2
TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=miniflux
TS_EXTRA_ARGS=--ssh
TS_SERVE_PORT=8080
```

Funnel will not be configured for this since TS_FUNNEL was not defined.
I adapted the example docker-compose.yml↗ from Miniflux to add in my Tailscale bits:
# torchlight! {&#34;lineNumbers&#34;: true} services: tailscale: # [tl! focus:start] image: ghcr.io/jbowdre/tailscale-docker:latest restart: unless-stopped container_name: miniflux-tailscale environment: TS_AUTHKEY: \${TS_AUTHKEY:?err} TS_HOSTNAME: \${TS_HOSTNAME:-ts-docker} TS_STATE_DIR: &#34;/var/lib/tailscale/&#34; TS_TAILSCALED_EXTRA_ARGS: \${TS_TAILSCALED_EXTRA_ARGS:-} TS_EXTRA_ARGS: \${TS_EXTRA_ARGS:-} TS_SERVE_PORT: \${TS_SERVE_PORT:-} TS_FUNNEL: \${TS_FUNNEL:-} volumes: - ./ts_data:/var/lib/tailscale/ # [tl! focus:end] miniflux: image: miniflux/miniflux:latest restart: unless-stopped container_name: miniflux depends_on: db: condition: service_healthy environment: - DATABASE_URL=postgres://\${DB_USER}:\${DB_PASS}@db/miniflux?sslmode=disable - RUN_MIGRATIONS=1 - CREATE_ADMIN=1 - ADMIN_USERNAME=\${ADMIN_USER} - ADMIN_PASSWORD=\${ADMIN_PASS} network_mode: &#34;service:tailscale&#34; # [tl! focus] db: image: postgres:15 restart: unless-stopped container_name: miniflux-db environment: - POSTGRES_USER=\${DB_USER} - POSTGRES_PASSWORD=\${DB_PASS} volumes: - ./mf_data:/var/lib/postgresql/data healthcheck: test: [&#34;CMD&#34;, &#34;pg_isready&#34;, &#34;-U&#34;, &#34;\${DB_USER}&#34;] interval: 10s start_period: 30s I can bring it up with:
docker compose up -d # [tl! .cmd .nocopy:1,5] [+] Running 4/4 ✔ Network miniflux_default Created ✔ Container miniflux-db Started ✔ Container miniflux-tailscale Started ✔ Container miniflux Created And I can hit it at https://miniflux.tailnet-name.ts.net from within my tailnet:
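Before loading the page, the sidecar's Serve config can be double-checked from the host (a quick sketch that assumes the tailscale CLI is present in the sidecar image, which it is for the image used here):

```shell
# run the tailscale CLI inside the sidecar to see what's being served
docker compose exec tailscale tailscale serve status
```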
Nice, right? Now to just convert all of my other containerized apps that don't really need to be public. Fortunately that shouldn't take too long since I've got this nice, portable, repeatable Docker Compose setup I can use.
Maybe I'll write about something other than Tailscale soon. Stay tuned!
While not documented for the image itself, the containerboot binary seems like it should accept a TS_SERVE_CONFIG argument↗ to designate the file path of the ipn.ServeConfig... but I couldn't find any information on how to actually configure that.&#160;&#x21a9;&#xfe0e;
If neither variable is set, the script just brings up Tailscale like normal... in which case you might as well just use the official image.&#160;&#x21a9;&#xfe0e;
This hostname will determine the fully-qualified domain name where the resource will be served: https://[hostname].[tailnet-name].ts.net. So you'll want to make sure it's a good one for what you're trying to do.&#160;&#x21a9;&#xfe0e;
Passing the --verbose flag to tailscaled increases the logging verbosity, which can be helpful if you need to troubleshoot.&#160;&#x21a9;&#xfe0e;
The --ssh flag to tailscale up will enable Tailscale SSH and (ACLs permitting) allow you to easily SSH directly into the Tailscale container without having to talk to the Docker host and spawn a shell from there.&#160;&#x21a9;&#xfe0e;
`,description:"Using Docker Compose to deploy containerized applications and make them available via Tailscale Serve and Tailscale Funnel",url:"https://runtimeterror.dev/tailscale-serve-docker-compose-sidecar/"},"https://runtimeterror.dev/salt-state-netdata-tailscale/":{title:"Quick Salt State to Deploy Netdata",tags:["homelab","iac","linux","salt","tailscale"],content:`As a follow-up to my recent explorations with using Tailscale Serve to make netdata↗ monitoring readily available on my tailnet↗, I wanted a quick way to reproduce that configuration across my handful of systems. These systems already have Tailscale installed↗ and configured, and they're all managed with Salt↗.
So here's a hasty Salt state that I used to make it happen.
It simply installs netdata using the handy-dandy kickstart script↗, and then configures Tailscale to Serve the netdata instance (with a trusted cert!) inside my tailnet over https://[hostname].[tailnet-name].ts.net:8443/netdata.
# torchlight! {&#34;lineNumbers&#34;: true} # -*- coding: utf-8 -*- # vim: ft=sls # Hasty Salt config to install Netdata and make it available within a tailnet # at https://[hostname].[tailnet-name].ts.net:8443/netdata curl: pkg.installed tailscale: pkg.installed: - version: latest netdata-kickstart: cmd.run: - name: curl -Ss https://get.netdata.cloud/kickstart.sh | sh -s -- --dont-wait - require: - pkg: curl # don&#39;t run this block if netdata is already running - unless: pgrep netdata tailscale-serve: cmd.run: - name: tailscale serve --bg --https 8443 --set-path /netdata 19999 - require: - pkg: tailscale - cmd: netdata-kickstart # don&#39;t run this if netdata is already tailscale-served - unless: tailscale serve status | grep -q &#39;/netdata proxy http://127.0.0.1:19999&#39; It's not super elegant... but it got the job done, and that's all I needed it to do.
sudo salt &#39;minion-name&#39; state.apply netdata # [tl! .cmd focus] minion-name: # [tl! .nocopy:start collapse:start] ---------- ID: curl Function: pkg.installed Result: True Comment: All specified packages are already installed Started: 22:59:00.821329 Duration: 28.639 ms Changes: ---------- ID: tailscale Function: pkg.installed Result: True Comment: All specified packages are already installed and are at the desired version Started: 22:59:00.850083 Duration: 4589.765 ms Changes: ---------- ID: netdata-kickstart Function: cmd.run Name: curl -Ss https://get.netdata.cloud/kickstart.sh | sh -s -- --dont-wait Result: True Comment: Command &#34;curl -Ss https://get.netdata.cloud/kickstart.sh | sh -s -- --dont-wait&#34; run Started: 22:59:05.441217 Duration: 10617.082 ms Changes: ---------- pid: 169287 retcode: 0 stderr: sh: 19: cd: can&#39;t cd to sh --- Using /tmp/netdata-kickstart-ZtqZcfWuqk as a temporary directory. --- --- Checking for existing installations of Netdata... --- --- No existing installations of netdata found, assuming this is a fresh install. --- --- Attempting to install using native packages... --- --- Repository configuration is already present, attempting to install netdata. --- [/tmp/netdata-kickstart-ZtqZcfWuqk]# env DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold install -y netdata OK [/tmp/netdata-kickstart-ZtqZcfWuqk]# test -x //usr/libexec/netdata/netdata-updater.sh OK [/tmp/netdata-kickstart-ZtqZcfWuqk]# grep -q \\-\\-enable-auto-updates //usr/libexec/netdata/netdata-updater.sh OK [/tmp/netdata-kickstart-ZtqZcfWuqk]# //usr/libexec/netdata/netdata-updater.sh --enable-auto-updates Thu Dec 21 22:59:15 UTC 2023 : INFO: netdata-updater.sh: Auto-updating has been ENABLED through cron, updater script linked to /etc/cron.daily/netdata-updater Thu Dec 21 22:59:15 UTC 2023 : INFO: netdata-updater.sh: If the update process fails and you have email notifications set up correctly for cron on this system, you should receive an email notification of the failure. Thu Dec 21 22:59:15 UTC 2023 : INFO: netdata-updater.sh: Successful updates will not send an email. OK Successfully installed the Netdata Agent. Official documentation can be found online at https://learn.netdata.cloud/docs/. Looking to monitor all of your infrastructure with Netdata? Check out Netdata Cloud at https://app.netdata.cloud. Join our community and connect with us on: - GitHub: https://github.com/netdata/netdata/discussions - Discord: https://discord.gg/5ygS846fR6 - Our community forums: https://community.netdata.cloud/ [/tmp/netdata-kickstart-ZtqZcfWuqk]# rm -rf /tmp/netdata-kickstart-ZtqZcfWuqk OK stdout: Reading package lists... Building dependency tree... Reading state information... The following packages were automatically installed and are no longer required: libnorm1 libpgm-5.2-0 libxmlb1 libzmq5 python3-contextvars python3-croniter python3-dateutil python3-gnupg python3-immutables python3-jmespath python3-msgpack python3-psutil python3-pycryptodome python3-tz python3-zmq Use &#39;apt autoremove&#39; to remove them. 
The following additional packages will be installed: netdata-ebpf-code-legacy netdata-plugin-apps netdata-plugin-chartsd netdata-plugin-debugfs netdata-plugin-ebpf netdata-plugin-go netdata-plugin-logs-management netdata-plugin-nfacct netdata-plugin-perf netdata-plugin-pythond netdata-plugin-slabinfo netdata-plugin-systemd-journal Suggested packages: netdata-plugin-cups netdata-plugin-freeipmi apcupsd nut nvme-cli The following NEW packages will be installed: netdata netdata-ebpf-code-legacy netdata-plugin-apps netdata-plugin-chartsd netdata-plugin-debugfs netdata-plugin-ebpf netdata-plugin-go netdata-plugin-logs-management netdata-plugin-nfacct netdata-plugin-perf netdata-plugin-pythond netdata-plugin-slabinfo netdata-plugin-systemd-journal 0 upgraded, 13 newly installed, 0 to remove and 11 not upgraded. Need to get 0 B/30.7 MB of archives. After this operation, 154 MB of additional disk space will be used. Selecting previously unselected package netdata-ebpf-code-legacy. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118906 files and directories currently installed.) Preparing to unpack .../00-netdata-ebpf-code-legacy_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-ebpf-code-legacy (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-ebpf. Preparing to unpack .../01-netdata-plugin-ebpf_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-ebpf (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-apps. Preparing to unpack .../02-netdata-plugin-apps_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-apps (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-pythond. Preparing to unpack .../03-netdata-plugin-pythond_1.44.0-77-nightly_all.deb ... Unpacking netdata-plugin-pythond (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-go. Preparing to unpack .../04-netdata-plugin-go_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-go (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-debugfs. Preparing to unpack .../05-netdata-plugin-debugfs_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-debugfs (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-nfacct. Preparing to unpack .../06-netdata-plugin-nfacct_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-nfacct (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-chartsd. Preparing to unpack .../07-netdata-plugin-chartsd_1.44.0-77-nightly_all.deb ... Unpacking netdata-plugin-chartsd (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-slabinfo. Preparing to unpack .../08-netdata-plugin-slabinfo_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-slabinfo (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-perf. Preparing to unpack .../09-netdata-plugin-perf_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-perf (1.44.0-77-nightly) ... 
Selecting previously unselected package netdata. Preparing to unpack .../10-netdata_1.44.0-77-nightly_amd64.deb ... Unpacking netdata (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-logs-management. Preparing to unpack .../11-netdata-plugin-logs-management_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-logs-management (1.44.0-77-nightly) ... Selecting previously unselected package netdata-plugin-systemd-journal. Preparing to unpack .../12-netdata-plugin-systemd-journal_1.44.0-77-nightly_amd64.deb ... Unpacking netdata-plugin-systemd-journal (1.44.0-77-nightly) ... Setting up netdata-plugin-nfacct (1.44.0-77-nightly) ... Setting up netdata (1.44.0-77-nightly) ... Setting up netdata-plugin-pythond (1.44.0-77-nightly) ... Setting up netdata-plugin-systemd-journal (1.44.0-77-nightly) ... Setting up netdata-plugin-debugfs (1.44.0-77-nightly) ... Setting up netdata-ebpf-code-legacy (1.44.0-77-nightly) ... Setting up netdata-plugin-perf (1.44.0-77-nightly) ... Setting up netdata-plugin-chartsd (1.44.0-77-nightly) ... Setting up netdata-plugin-ebpf (1.44.0-77-nightly) ... Setting up netdata-plugin-apps (1.44.0-77-nightly) ... Setting up netdata-plugin-logs-management (1.44.0-77-nightly) ... Setting up netdata-plugin-go (1.44.0-77-nightly) ... Setting up netdata-plugin-slabinfo (1.44.0-77-nightly) ... Processing triggers for systemd (245.4-4ubuntu3.22) ... ---------- ID: tailscale-serve Function: cmd.run Name: tailscale serve --bg --https 8443 --set-path /netdata 19999 Result: True Comment: Command &#34;tailscale serve --bg --https 8443 --set-path /netdata 19999&#34; run Started: 22:59:16.060397 Duration: 62.624 ms Changes: &#39; # [tl! collapse:end] ---------- pid: 170328 retcode: 0 stderr: stdout: Available within your tailnet: # [tl! focus:start] https://minion-name.tailnet-name.ts.net:8443/netdata |-- proxy http://127.0.0.1:19999 Serve started and running in the background. To disable the proxy, run: tailscale serve --https=8443 off Summary for minion-name ------------ Succeeded: 4 (changed=2) # [tl! highlight] Failed: 0 ------------ Total states run: 4 Total run time: 15.298 s # [tl! .nocopy:end focus:end] `,description:"A hasty Salt state to deploy netdata monitoring and publish it internally on my tailnet with Tailscale Serve",url:"https://runtimeterror.dev/salt-state-netdata-tailscale/"},"https://runtimeterror.dev/tailscale-ssh-serve-funnel/":{title:"Tailscale Feature Highlight: SSH, Serve, and Funnel",tags:["homelab","networking","proxmox","tailscale"],content:`I've spent the past two years in love with Tailscale↗, which builds on the secure and high-performance Wireguard VPN protocol and makes it really easy to configure and manage. Being able to easily (and securely) access remote devices as if they were on the same LAN is pretty awesome to begin with, but Tailscale is packed with an ever-expanding set of features that can really help to streamline your operations too. Here are three of my favorites.
Tailscale SSH Tailscale already takes care of issuing, rotating, and otherwise managing the Wireguard keys used for securing communications between the systems in your tailnet. Tailscale SSH↗ lets it do the same for your SSH keys as well. No more manually dropping public keys on systems you're setting up for remote access. No more scrambling to figure out how to get your private key onto your mobile device so you can SSH to a server. No more worrying about who has access to what. Tailscale can solve all those concerns for you - and it does it without impacting traditional SSH operations:
With Tailscale, you can already connect machines in your network, and encrypt communications end-to-end from one point to another—and this includes, for example, SSHing from your work laptop to your work desktop. Tailscale also knows your identity, since that’s how you connected to your tailnet. When you enable Tailscale SSH, Tailscale claims port 22 for the Tailscale IP address (that is, only for traffic coming from your tailnet) on the devices for which you have enabled Tailscale SSH. This routes SSH traffic for the device from the Tailscale network to an SSH server run by Tailscale, instead of your standard SSH server. With Tailscale SSH, based on the ACLs in your tailnet, you can allow devices to connect over SSH and rely on Tailscale for authentication instead of public key authentication.
All you need to advertise Tailscale SSH on a system is to pass the appropriate flag in the tailscale up command:
sudo tailscale up --ssh # [tl! .cmd] To actually use the feature, though, you'll need to make sure that your Tailscale ACL permits access. The default &quot;allow all&quot; ACL does this:
{ &#34;acls&#34;: [ // Allow all connections. { &#34;action&#34;: &#34;accept&#34;, &#34;src&#34;: [&#34;*&#34;], &#34;dst&#34;: [&#34;*:*&#34;] }, ], &#34;ssh&#34;: [ // [tl! highlight:start] // Allow all users to SSH into their own devices in check mode. { &#34;action&#34;: &#34;check&#34;, &#34;src&#34;: [&#34;autogroup:member&#34;], &#34;dst&#34;: [&#34;autogroup:self&#34;], &#34;users&#34;: [&#34;autogroup:nonroot&#34;, &#34;root&#34;] } ] // [tl! highlight:end] } The acls block allows all nodes to talk to each other over all ports, and the ssh block will allow all users in the tailnet to SSH into systems they own (both as nonroot users as well as the root account) but only if they have reauthenticated to Tailscale within the last 12 hours (due to that check action).
Most of my tailnet nodes are tagged with a location (internal/external) instead of belonging to a single user, and the check action doesn't work when the source is a tagged system. So my SSH ACL looks a bit more like this:
{ &#34;acls&#34;: [ { // internal systems can SSH to internal and external systems &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;tag:internal&#34;], &#34;ports&#34;: [ &#34;tag:internal:22&#34;, &#34;tag:external:22&#34; ], }, ], &#34;tagOwners&#34;: { // [tl! collapse:3] &#34;tag:external&#34;: [&#34;group:admins&#34;], &#34;tag:internal&#34;: [&#34;group:admins&#34;], }, &#34;ssh&#34;: [ { // users can SSH to their own systems and those tagged as internal and external &#34;action&#34;: &#34;check&#34;, &#34;src&#34;: [&#34;autogroup:members&#34;], &#34;dst&#34;: [&#34;autogroup:self&#34;, &#34;tag:internal&#34;, &#34;tag:external&#34;], &#34;users&#34;: [&#34;autogroup:nonroot&#34;, &#34;root&#34;], }, { // internal systems can SSH to internal and external systems, // but external systems can&#39;t SSH at all &#34;action&#34;: &#34;accept&#34;, &#34;src&#34;: [&#34;tag:internal&#34;], &#34;dst&#34;: [&#34;tag:internal&#34;, &#34;tag:external&#34;], &#34;users&#34;: [&#34;autogroup:nonroot&#34;], }, ], } This way, SSH connections originating from internal systems will be accepted, while those originating from untagged systems1 will have the extra check for tailnet authentication. You might also note that this policy prevents connections from tagged systems as the root user, requiring instead that the user log in with their own account and then escalate as needed.
These ACLs can get pretty granular↗, and I think it's pretty cool to be able to codify your SSH access rules in a centrally-managed2 policy instead of having to manually keep track of which keys are on which systems.
Once SSH is enabled on a tailnet node and the ACL rules are in place, you can SSH from a Tailscale-protected system to another as easily as ssh [hostname] and you'll be connected right away - no worrying about keys or fumbling to enter credentials. I think this is doubly cool when implemented on systems running in The Cloud; Tailscale provides the connectivity so I don't need to open up port 22 to the world.
ssh tsdemo # [tl! .cmd] Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.5.11-6-pve x86_64) # [tl! .nocopy:start] * Documentation: https://help.ubuntu.com * Management: https://landscape.canonical.com * Support: https://ubuntu.com/advantage Last login: Tue Dec 19 04:17:15 UTC 2023 from 100.73.92.61 on pts/3 john@tsdemo:~$ # [tl! .nocopy:end] As a bonus, I can also open an SSH session from the Tailscale admin console↗3: That even works from mobile devices, too!
Tailscale Serve I've mentioned in the past how impressed I was (and still am!) by the Caddy webserver↗ and how effortless it makes configuring a reverse proxy with automatic TLS. I've used it for a lot of my externally-facing projects.
Caddy is great, but it's not quite as easy to use for internal stuff - I'd need a public DNS record and inbound HTTP access in order for the ACME challenge to complete and a cert to be issued and installed, or I would have to manually create a certificate and load it in the Caddy config. That's probably not a great fit for wanting to proxy my Proxmox host. And that is where the capabilities of Tailscale Serve↗ really come in handy.
Tailscale Serve is a feature that allows you to route traffic from other devices on your Tailscale network (known as a tailnet) to a local service running on your device. You can think of this as sharing the service, such as a website, with the rest of your tailnet.
Prerequisites
Tailscale Serve requires that the MagicDNS and HTTPS features be enabled on your Tailnet. You can learn how to turn those on here↗.
The general syntax is:
tailscale serve <target>
<target> can be a file, directory, text, or most commonly the location of a service running on the local machine. The location of the local service can be expressed as a port number (e.g., 3000), a partial URL (e.g., localhost:3000), or a full URL including a path (e.g., http://localhost:3000/foo).
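For illustration, those three target forms map to commands like these (using the --bg flag covered below so each one runs in the background):

```shell
# proxy a local service by port number
tailscale serve --bg 3000
# ...or by partial URL
tailscale serve --bg localhost:3000
# ...or by full URL, including a path on the backend
tailscale serve --bg http://localhost:3000/foo
```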
The command also supports some useful flags:
```
--bg, --bg=false           Run the command as a background process (default false)
--http uint                Expose an HTTP server at the specified port
--https uint               Expose an HTTPS server at the specified port (default mode)
--set-path string          Appends the specified path to the base URL for accessing the underlying service
--tcp uint                 Expose a TCP forwarder to forward raw TCP packets at the specified port
--tls-terminated-tcp uint  Expose a TCP forwarder to forward TLS-terminated TCP packets at the specified port
--yes, --yes=false         Update without interactive prompts (default false)
```

Tailscale Serve can be used for spawning a simple file server (like this one which shares the contents of the /demo directory):
sudo tailscale serve /demo # [tl! .cmd] Available within your tailnet: # [tl! .nocopy:5] https://tsdemo.tailnet-name.ts.net/ |-- path /demo Press Ctrl+C to exit. Note that this server is running in the foreground, and that it's serving the site with an automatically-generated automatically-trusted Let's Encrypt↗ certificate.
I can also use Tailscale Serve for proxying another web server, like Cockpit↗, which runs on http://localhost:9090:
sudo tailscale serve --bg 9090 # [tl! .cmd] Available within your tailnet: # [tl! .nocopy:6] https://tsdemo.tailnet-name.ts.net/ |-- proxy http://127.0.0.1:9090 Serve started and running in the background. To disable the proxy, run: tailscale serve --https=443 off This time, I included the --bg flag so that the server would run in the background, and I told it to proxy port 9090 instead of a file path.
But what if I want to proxy another service (like netdata↗, which runs on http://localhost:19999) at the same time? I can either proxy it on another port, like 8443:
sudo tailscale serve --bg --https 8443 19999 # [tl! .cmd] Available within your tailnet: # [tl! .nocopy:9] https://tsdemo.tailnet-name.ts.net/ |-- proxy http://127.0.0.1:9090 https://tsdemo.tailnet-name.ts.net:8443/ |-- proxy http://127.0.0.1:19999 Serve started and running in the background. To disable the proxy, run: tailscale serve --https=8443 off Or serve it at a different path:
sudo tailscale serve --bg --set-path /netdata 19999 # [tl! .cmd] Available within your tailnet: # [tl! .nocopy:9] https://tsdemo.tailnet-name.ts.net/ |-- proxy http://127.0.0.1:9090 https://tsdemo.tailnet-name.ts.net/netdata |-- proxy http://127.0.0.1:19999 Serve started and running in the background. To disable the proxy, run: tailscale serve --https=443 off Stubborn Apps
Not all web apps adapt well to being served at a different path than they expect. It works fine for netdata, but did not work with Cockpit (at least not without digging deeper into the configuration to change the base URL). But hey, that's why we've got options!
Tailscale Funnel So Tailscale Serve works well for making web resources available inside of my tailnet... but what if I want to share them externally? Tailscale Funnel↗ has that use case covered, and it makes it easy to share things without having to manage DNS records, punch holes in firewalls, or configure certificates.
Tailscale Funnel is a feature that allows you to route traffic from the wider internet to a local service running on a machine in your Tailscale network (known as a tailnet). You can think of this as publicly sharing a local service, like a web app, for anyone to access—even if they don’t have Tailscale themselves.
In addition to requiring that HTTPS is enabled, Funnel also requires that nodes have the funnel attribute applied through the ACL policy.
There's a default policy snippet to apply this:
```
"nodeAttrs": [
  {
    "target": ["autogroup:member"],
    "attr":   ["funnel"],
  },
],
```
But I use a tag to manage funnel privileges instead, so my configuration will look something like this:
```
{
  "acls": [ // [tl! collapse:start]
    {
      // internal systems can SSH to internal and external systems
      "action": "accept",
      "users":  ["tag:internal"],
      "ports": [
        "tag:internal:22",
        "tag:external:22"
      ],
    },
  ], // [tl! collapse:end]
  "tagOwners": {
    "tag:external": ["group:admins"],
    "tag:funnel":   ["group:admins"], // [tl! focus]
    "tag:internal": ["group:admins"],
  },
  "ssh": [ // [tl! collapse:start]
    {
      // users can SSH to their own systems and those tagged as internal and external
      "action": "check",
      "src":    ["autogroup:members"],
      "dst":    ["autogroup:self", "tag:internal", "tag:external"],
      "users":  ["autogroup:nonroot", "root"],
    },
    {
      // internal systems can SSH to internal and external systems,
      // but external systems can't SSH at all
      "action": "accept",
      "src":    ["tag:internal"],
      "dst":    ["tag:internal", "tag:external"],
      "users":  ["autogroup:nonroot"],
    },
  ], // [tl! collapse:end]
  "nodeAttrs": [ // [tl! focus:start]
    {
      // devices with the funnel tag can enable Tailscale Funnel
      "target": ["tag:funnel"],
      "attr":   ["funnel"],
    },
  ] // [tl! focus:end]
}
```
Now only nodes with the funnel tag will be able to enable Funnel.
From there, the process to activate Tailscale Funnel is basically identical to that of Tailscale Serve - you just use tailscale funnel instead of tailscale serve.
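For example, publishing the Cockpit proxy from earlier would look something like the following. This is a minimal sketch that simply reuses the hostname and port from the examples above:

```shell
# Tailnet-only: reachable at https://tsdemo.tailnet-name.ts.net/
sudo tailscale serve --bg 9090

# Public: the same proxy, published to the wider internet with Funnel
sudo tailscale funnel --bg 9090
```

The flags shown earlier (--https, --set-path, and so on) work the same way with funnel as they do with serve.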
Funnel Ports, Not Resources
A Funnel configuration is applied to the port that Tailscale Serve uses to make a resource available, not the resource itself. In the example above, I have both Cockpit and netdata being served over port 443. If I try to use sudo tailscale funnel --set-path /netdata 19999 to Funnel just the netdata instance, that will actually Funnel both resources instead of just the one.
If I want to make the netdata instance available publicly while keeping Cockpit internal-only, I'll need to serve netdata on a different port. Funnel only supports↗ ports 443, 8443, and 10000, so I'll use 8443:
```
sudo tailscale funnel --bg --https 8443 --set-path /netdata 19999 # [tl! .cmd]
Available on the internet: # [tl! .nocopy:6]

https://tsdemo.tailnet-name.ts.net:8443/netdata
|-- proxy http://127.0.0.1:19999

Funnel started and running in the background.
To disable the proxy, run: tailscale funnel --https=8443 off
```
It will take 10 or so minutes for the public DNS record to get created, but after that anyone on the internet (not just within my tailnet!) would be able to access the resource I've shared.
I can use tailscale serve status to confirm that both Cockpit and netdata are served internally on port 443, but only netdata is published externally on port 8443:
```
sudo tailscale serve status # [tl! .cmd]
# [tl! .nocopy:9]
# Funnel on:
#     - https://tsdemo.tailnet-name.ts.net:8443

https://tsdemo.tailnet-name.ts.net (tailnet only)
|-- /        proxy http://127.0.0.1:9090
|-- /netdata proxy http://127.0.0.1:19999

https://tsdemo.tailnet-name.ts.net:8443 (Funnel on)
|-- /netdata proxy http://127.0.0.1:19999
```
Pretty cool, right?
Other Features
This post has covered some of my favorite (and most-used) Tailscale features, but there are plenty of other cool tricks up Tailscale's sleeves. These are some others that are definitely worth checking out:
- Mullvad Exit Nodes↗
- Taildrop↗
- golink↗ (also covered here)

And if you're particularly nerdy like me, these will probably also grab your interest:
- GitOps for Tailscale ACLs↗
- Manage Tailscale resources using Terraform↗

Need more inspiration? Tailscale has a pretty thorough collection of example use cases↗ to help get the sparks flying.
I'm sure I'll have more Tailscale-centric posts to share in the future, too. Stay tuned!
Or the Tailscale admin web console - as we'll soon see. ↩︎
And potentially version-controlled↗. ↩︎
SSH connections originating from the admin portal are associated with that logon, so they will follow the check portion of the policy. The first attempt to connect will require reauthentication with Tailscale, and subsequent connections will auto-connect for the next 12 hours. ↩︎
description: "Exploring some of my favorite Tailscale addon features: SSH, Serve, and Funnel."
url: https://runtimeterror.dev/tailscale-ssh-serve-funnel/

Automating Security Camera Notifications With Home Assistant and Ntfy
tags: api, automation, homeassistant

A couple of months ago, I wrote about how I was using a self-hosted instance of ntfy↗ to help streamline notification pushes from a variety of sources. I closed that post with a quick look at how I had integrated ntfy into my Home Assistant setup for some basic notifications.
I've now used that immense power to enhance the notifications I get from the Reolink security cameras↗ scattered around my house. I selected Reolink cameras specifically because I knew they were supported by Home Assistant, and for the on-device animal/person/vehicle detection, which allows a bit of extra control over which types of motion events trigger a notification or other action. I've been very happy with this choice, but I have found that the Reolink app itself can be a bit clunky:
- The app lets you send notifications on a schedule (I only want notifications from the indoor cameras during work hours when no one is home), but doesn't make it easy to override that schedule (like when it's a holiday and we're all at home anyway).
- Push notifications don't include an image capture, so when I receive a notification about a person in my backyard I have to open the app, select the correct camera, select the Playback option, and scrub back and forth until I see whatever my camera saw.

I figured I could combine the excellent Reolink integration for Home Assistant↗ with Home Assistant's powerful Automation platform and ntfy to get more informative notifications and more flexible alert schedules. Here's the route I took.
Alert on motion detection
Ntfy Integration
Since manually configuring ntfy in Home Assistant via the RESTful Notifications integration, I found that an ntfy-specific integration↗ was available through the Home Assistant Community Store↗ addon. That setup is a bit more flexible, so I've switched my setup to use it instead:
```
# configuration.yaml
notify:
  - name: ntfy
    platform: rest # [tl! --:8 collapse:8]
    method: POST_JSON
    headers:
      Authorization: !secret ntfy_token
    data:
      topic: home_assistant
    title_param_name: title
    message_param_name: message
    resource: !secret ntfy_url
    platform: ntfy # [tl! ++:3]
    url: !secret ntfy_url
    token: !secret ntfy_token
    topic: home_assistant
```
The Reolink integration exposes a number of entities for each camera. For triggering a notification on motion detection, I'll be interested in the binary sensor↗ entities named like binary_sensor.$location_$type (like binary_sensor.backyard_person and binary_sensor.driveway_vehicle), the state of which will transition from off to on when the selected motion type is detected.
So I'll begin by crafting a simple automation which will push out a notification whenever any of the listed cameras detect a person or vehicle1:
```
# torchlight! {"lineNumbers": true}
# exterior_motion.yaml
alias: Exterior Motion Alerts
description: ""
trigger:
  - platform: state
    entity_id:
      - binary_sensor.backyard_person
      - binary_sensor.driveway_person
      - binary_sensor.driveway_vehicle
      - binary_sensor.east_side_front_person
      - binary_sensor.east_side_rear_person
      - binary_sensor.west_side_person
    from: "off"
    to: "on"
condition: []
action:
  - service: notify.ntfy
    data:
      title: Motion detected!
      message: "{{ trigger.to_state.attributes.friendly_name }}"
```
Templating
That last line is taking advantage of Jinja templating and trigger variables↗ so that the resulting notification displays the friendly name of whichever binary_sensor triggered the automation run. This way, I'll see something like "Backyard Person" instead of the entity ID listed earlier.
I'll step outside and see if it works...

Capture a snapshot
Each Reolink camera also exposes a camera.$location_sub entity which represents the video stream from the connected camera. I can add another action to the notification so that it will grab a snapshot, but I'll also need a way to match the camera entity to the correct binary_sensor entity. I can do that by adding a variable set to the bottom of the automation:
```
# torchlight! {"lineNumbers": true}
# exterior_motion.yaml [tl! focus]
alias: Exterior Motion Alerts
description: ""
trigger: # [tl! collapse:start]
  - platform: state
    entity_id:
      - binary_sensor.backyard_person
      - binary_sensor.driveway_person
      - binary_sensor.driveway_vehicle
      - binary_sensor.east_side_front_person
      - binary_sensor.east_side_rear_person
      - binary_sensor.west_side_person
    from: "off"
    to: "on" # [tl! collapse:end]
condition: []
action:
  - service: camera.snapshot # [tl! ++:start focus:start]
    target:
      entity_id: "{{ cameras[trigger.to_state.entity_id] }}"
    data:
      filename: /media/snaps/motion.jpg # [tl! ++:end focus:end]
  - service: notify.ntfy
    data:
      title: Motion detected!
      message: "{{ trigger.to_state.attributes.friendly_name }}"
variables: # [tl! ++:start focus:start]
  cameras:
    binary_sensor.backyard_person: camera.backyard_sub
    binary_sensor.driveway_person: camera.driveway_sub
    binary_sensor.driveway_vehicle: camera.driveway_sub
    binary_sensor.east_side_front_person: camera.east_side_front_sub
    binary_sensor.east_side_rear_person: camera.east_side_rear_sub
    binary_sensor.west_side_person: camera.west_side_sub # [tl! ++:end focus:end]
```
That "{{ cameras[trigger.to_state.entity_id] }}" template will look up the ID of the triggering binary_sensor and return the appropriate camera entity, and that will use the camera.snapshot service↗ to save a snapshot to the designated location (/media/snaps/motion.jpg).
Before this will actually work, though, I need to reconfigure Home Assistant to allow access to the storage location, and I should also go ahead and pre-create the folder so there aren't any access issues.
```
# configuration.yaml
homeassistant:
  allowlist_external_dirs:
    - "/media/snaps/"
```
I'm using the Home Assistant Operating System virtual appliance↗, so /media is already symlinked to /root/media inside the Home Assistant installation directory. So I'll just log into that shell and create the snaps subdirectory:
```
mkdir -p /media/snaps # [tl! .cmd_root]
```
Rather than walking outside each time I want to test this, I'll just use the Home Assistant Developer Tools to manually toggle the state of the binary_sensor.backyard_person entity to on, and I should then be able to see the snapshot in the Media interface. Woo, look at me making progress!
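If you'd rather test from a terminal than click through the UI, Home Assistant's REST API can push the same state change. This is just a rough sketch: HA_URL and HA_TOKEN are placeholders for your own instance URL and a long-lived access token, not values from this post.

```shell
# Force the sensor to "on" via the REST API (placeholder URL/token);
# the resulting state change is enough to fire the automation's state trigger.
curl -X POST \
  -H "Authorization: Bearer ${HA_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"state": "on"}' \
  "${HA_URL}/api/states/binary_sensor.backyard_person"
```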
Attach the snapshot
Now that I've captured the snap, I need to figure out how to attach it to the notification. Ntfy supports inline image attachments↗, which is handy, but it expects those to be delivered via an HTTP PUT action. Neither my original HTTP POST approach nor the Ntfy integration currently supports this, so I had to use the shell_command integration↗ to make the call directly.
I can't use the handy !secret expansion inside of the shell command, though, so I'll need a workaround to avoid sticking sensitive details directly in my configuration.yaml. I can use a dummy sensor to hold the value, and then use the {{ states('sensor.$sensor_name') }} template to retrieve it.
So here we go:
```
# configuration.yaml [tl! focus:start]
# dummy sensor to make ntfy secrets available to template engine
template:
  - sensor:
      - name: ntfy_token
        state: !secret ntfy_token # [tl! highlight]
      - name: ntfy_url
        state: !secret ntfy_url # [tl! highlight focus:end]

notify:
  - name: ntfy
    platform: ntfy
    url: !secret ntfy_url
    token: !secret ntfy_token
    topic: home_assistant # [tl! highlight:10,1]

shell_command: # [tl! focus:9 highlight:6,1]
  ntfy_put: >
    curl
    --header 'Title: {{ title }}'
    --header 'Priority: {{ priority }}'
    --header 'Filename: {{ filename }}'
    --header 'Authorization: Bearer {{ states('sensor.ntfy_token') }}'
    --upload-file '{{ file }}'
    --header 'Message: {{ message }}'
    --url '{{ states('sensor.ntfy_url') }}/home_assistant'
```
Now I just need to replace the service call in the automation with the new shell_command.ntfy_put one:
# torchlight! {&#34;lineNumbers&#34;: true} # exterior_motion.yaml # [tl! focus] alias: Exterior Motion Alerts description: &#34;&#34; trigger: # [tl! collapse:start] - platform: state entity_id: - binary_sensor.backyard_person - binary_sensor.driveway_person - binary_sensor.driveway_vehicle - binary_sensor.east_side_front_person - binary_sensor.east_side_rear_person - binary_sensor.west_side_person from: &#34;off&#34; to: &#34;on&#34; # [tl! collapse:end] condition: [] action: - service: camera.snapshot target: entity_id: &#34;{{ cameras[trigger.to_state.entity_id] }}&#34; data: filename: /media/snaps/motion.jpg - service: notify.ntfy # [tl! --:start focus:start] data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; # [tl! --:end] - service: shell_command.ntfy_put # [tl! ++:start reindex(-4)] data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; file: /media/snaps/motion.jpg # [tl! ++:end focus:end] variables: # [tl! collapse:start] cameras: binary_sensor.backyard_person: camera.backyard_sub binary_sensor.driveway_person: camera.driveway_sub binary_sensor.driveway_vehicle: camera.driveway_sub binary_sensor.east_side_front_person: camera.east_side_front_sub binary_sensor.east_side_rear_person: camera.east_side_rear_sub binary_sensor.west_side_person: camera.west_side_sub # [tl! collapse:end] Now when I wander outside... Well that guy seems sus - but hey, it worked!
Backoff rate limit
Of course, I'll also continue to get notified about that creeper in the backyard about every 15-20 seconds or so. That's not quite what I want. The easy way to prevent an automation from firing constantly would be to insert a delay↗ action, but that would be a global delay rather than per-camera. I don't necessarily need to know every time the weirdo in the backyard moves, but I would like to know if he moves around to the side yard or driveway. So I needed something more flexible than an automation-wide delay.
Instead, I'll create a 5-minute timer↗ for each camera by simply adding this to my configuration.yaml:
```
# configuration.yaml
timer:
  backyard_person:
    duration: "00:05:00"
  driveway_person:
    duration: "00:05:00"
  driveway_vehicle:
    duration: "00:05:00"
  east_front_person:
    duration: "00:05:00"
  east_rear_person:
    duration: "00:05:00"
  west_person:
    duration: "00:05:00"
```
Back in the automation, I'll add a new timers variable set which will help to map the binary_sensor to the corresponding timer object. I can then append an action to start the timer, and a condition so that the automation will only fire if the timer for a given camera is not currently running. I'll also set the automation's mode to single (so that it will only run once at a time), and set the max_exceeded value to silent (so that multiple triggers won't raise any errors).
# torchlight! {&#34;lineNumbers&#34;: true} # exterior_motion.yaml # [tl! focus] alias: Exterior Motion Alerts description: &#34;&#34; trigger: # [tl! collapse:start] - platform: state entity_id: - binary_sensor.backyard_person - binary_sensor.driveway_person - binary_sensor.driveway_vehicle - binary_sensor.east_side_front_person - binary_sensor.east_side_rear_person - binary_sensor.west_side_person from: &#34;off&#34; to: &#34;on&#34; # [tl! collapse:end] condition: [] # [tl! focus:3 --] condition: # [tl! ++:2 reindex(-1)] - condition: template value_template: &#34;{{ is_state(timers[trigger.to_state.entity_id], &#39;idle&#39;) }}&#34; action: - service: camera.snapshot target: entity_id: &#34;{{ cameras[trigger.to_state.entity_id] }}&#34; data: filename: /media/snaps/motion.jpg - service: notify.ntfy data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; - service: shell_command.ntfy_put data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; file: /media/snaps/motion.jpg - service: timer.start # [tl! focus:2 ++:2] target: entity_id: &#34;{{ timers[trigger.to_state.entity_id] }}&#34; mode: single # [tl! focus:1 ++:1] max_exceeded: silent variables: cameras: # [tl! collapse:start] binary_sensor.backyard_person: camera.backyard_sub binary_sensor.driveway_person: camera.driveway_sub binary_sensor.driveway_vehicle: camera.driveway_sub binary_sensor.east_side_front_person: camera.east_side_front_sub binary_sensor.east_side_rear_person: camera.east_side_rear_sub binary_sensor.west_side_person: camera.west_side_sub # [tl! collapse:end] timers: # [tl! ++:start focus:start] binary_sensor.backyard_person: timer.backyard_person binary_sensor.driveway_person: timer.driveway_person binary_sensor.driveway_vehicle: timer.driveway_vehicle binary_sensor.east_side_front_person: timer.east_front_person binary_sensor.east_side_rear_person: timer.east_rear_person binary_sensor.west_side_person: timer.west_person# [tl! ++:end focus:end] That pretty much takes care of my needs for exterior motion alerts, and should keep me informed if someone is poking around my house (or, more frequently, making a delivery).
Managing interior alerts
I've got a few interior cameras which I'd like to monitor too, so I'll start by just copying the exterior automation and updating the entity IDs:
# torchlight! {&#34;lineNumbers&#34;: true} # interior_motion.yaml alias: Interior Motion Alerts description: &#34;&#34; trigger: - platform: state entity_id: - binary_sensor.kitchen_back_door_person - binary_sensor.garage_person - binary_sensor.garage_vehicle - binary_sensor.study_entryway_person from: &#34;off&#34; to: &#34;on&#34; condition: - condition: template value_template: &#34;{{ is_state(timers[trigger.to_state.entity_id], &#39;idle&#39;) }}&#34; action: - service: camera.snapshot target: entity_id: &#34;{{ cameras[trigger.to_state.entity_id] }}&#34; data: filename: /media/snaps/motion.jpg - service: shell_command.ntfy_put data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; file: /media/snaps/motion.jpg - service: timer.start target: entity_id: &#34;{{ timers[trigger.to_state.entity_id] }}&#34; max_exceeded: silent mode: single variables: cameras: binary_sensor.kitchen_back_door_person: camera.kitchen_back_door_sub binary_sensor.study_entryway_person: camera.study_entryway_sub binary_sensor.garage_person: camera.garage_sub binary_sensor.garage_vehicle: camera.garage_sub timers: binary_sensor.kitchen_back_door_person: timer.kitchen_person binary_sensor.study_entryway_person: timer.study_person binary_sensor.garage_person: timer.garage_person binary_sensor.garage_vehicle: timer.garage_vehicle But I don't typically want to get alerted by these cameras if my wife or I are home and awake. So I'll use the local calendar integration↗ to create a schedule for when the interior cameras should be active. Once that integration is enabled and the entity calendar.interior_camera_schedule created, I can navigate to the Calendar section of my Home Assistant interface to create the recurring calendar events (with the summary &quot;On&quot;). I'll basically be enabling notifications while we're sleeping and while we're at work, but disabling notifications while we're expected to be at home.
So then I'll just add another condition so that the automation will only fire during those calendar events:
# torchlight! {&#34;lineNumbers&#34;: true} # interior_motion.yaml [tl! focus] alias: Interior Motion Alerts description: &#34;&#34; trigger: # [tl! collapse:start] - platform: state entity_id: - binary_sensor.kitchen_back_door_person - binary_sensor.garage_person - binary_sensor.garage_vehicle - binary_sensor.study_entryway_person from: &#34;off&#34; to: &#34;on&#34; # [tl! collapse:end] condition: - condition: template value_template: &#34;{{ is_state(timers[trigger.to_state.entity_id], &#39;idle&#39;) }}&#34; - condition: state # [tl! focus:2 ++:2] entity_id: calendar.interior_camera_schedule state: &#34;on&#34; action: # [tl! collapse:start] - service: camera.snapshot target: entity_id: &#34;{{ cameras[trigger.to_state.entity_id] }}&#34; data: filename: /media/snaps/motion.jpg - service: shell_command.ntfy_put data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; file: /media/snaps/motion.jpg - service: timer.start target: entity_id: &#34;{{ timers[trigger.to_state.entity_id] }}&#34; # [tl! collapse:end] max_exceeded: silent mode: single variables: # [tl! collapse:start] cameras: binary_sensor.kitchen_back_door_person: camera.kitchen_back_door_sub binary_sensor.study_entryway_person: camera.study_entryway_sub binary_sensor.garage_person: camera.garage_sub binary_sensor.garage_vehicle: camera.garage_sub timers: binary_sensor.kitchen_back_door_person: timer.kitchen_person binary_sensor.study_entryway_person: timer.study_person binary_sensor.garage_person: timer.garage_person binary_sensor.garage_vehicle: timer.garage_vehicle # [tl! collapse:end] I'd also like to ensure that the interior motion alerts are also activated whenever our Abode↗ security system is armed, regardless of what time that may be. That will make the condition a little bit trickier: alerts should be pushed if the timer isn't running AND the schedule is active OR the security system is armed (in either &quot;Home&quot; or &quot;Away&quot; mode). So here's what that will look like:
# torchlight! {&#34;lineNumbers&#34;: true} # interior_motion.yaml [tl! focus] alias: Interior Motion Alerts description: &#34;&#34; trigger: # [tl! collapse:start] - platform: state entity_id: - binary_sensor.kitchen_back_door_person - binary_sensor.garage_person - binary_sensor.garage_vehicle - binary_sensor.study_entryway_person from: &#34;off&#34; to: &#34;on&#34; # [tl! collapse:end] condition: # [tl! focus:start] - condition: and # [tl! ++:1] conditions: # [tl! collapse:5] - condition: template # [tl! --:4] value_template: &#34;{{ is_state(timers[trigger.to_state.entity_id], &#39;idle&#39;) }}&#34; - condition: state entity_id: calendar.interior_camera_schedule state: &#34;on&#34; - condition: template # [tl! ++:start reindex(-5)] value_template: &#34;{{ is_state(timers[trigger.to_state.entity_id], &#39;idle&#39;) }}&#34; - condition: or conditions: - condition: state entity_id: calendar.interior_camera_schedule state: &#34;on&#34; - condition: state state: armed_away entity_id: alarm_control_panel.abode_alarm - condition: state state: armed_home entity_id: alarm_control_panel.abode_alarm # [tl! ++:end focus:end] action: # [tl! collapse:start] - service: camera.snapshot target: entity_id: &#34;{{ cameras[trigger.to_state.entity_id] }}&#34; data: filename: /media/snaps/motion.jpg - service: shell_command.ntfy_put data: title: Motion detected! message: &#34;{{ trigger.to_state.attributes.friendly_name }}&#34; file: /media/snaps/motion.jpg - service: timer.start target: entity_id: &#34;{{ timers[trigger.to_state.entity_id] }}&#34; # [tl! collapse:end] max_exceeded: silent mode: single variables: # [tl! collapse:start] cameras: binary_sensor.kitchen_back_door_person: camera.kitchen_back_door_sub binary_sensor.study_entryway_person: camera.study_entryway_sub binary_sensor.garage_person: camera.garage_sub binary_sensor.garage_vehicle: camera.garage_sub timers: binary_sensor.kitchen_back_door_person: timer.kitchen_person binary_sensor.study_entryway_person: timer.study_person binary_sensor.garage_person: timer.garage_person binary_sensor.garage_vehicle: timer.garage_vehicle # [tl! collapse:end] Snooze or disable alerts We've got a lawn service that comes pretty regularly to take care of things, and I don't want to get constant alerts while they're doing things in the yard. Or maybe we stay up a little late one night and don't want to get pinged with interior alerts during that time. So I created a script to snooze all motion alerts for 30 minutes, simply by temporarily disabling the automations I just created:
# torchlight! {&#34;lineNumbers&#34;: true} # snooze_motion_alerts.yaml alias: Snooze Motion Alerts sequence: - service: automation.turn_off data: stop_actions: true target: entity_id: - automation.exterior_motion_alerts - automation.interior_motion_alerts - service: notify.ntfy data: title: Motion Snooze message: Camera motion alerts are disabled for 30 minutes. - delay: hours: 0 minutes: 30 seconds: 0 milliseconds: 0 - service: automation.turn_on data: {} target: entity_id: - automation.interior_motion_alerts - automation.exterior_motion_alerts - service: notify.ntfy data: title: Motion Resume message: Camera motion alerts are resumed. mode: single icon: mdi:alarm-snooze I can then add that script to the camera dashboard in Home Assistant or pin it to the home controls on my Android phone for easy access.
I'll also create another script for manually toggling interior alerts for when we're home at an odd time:
```
# torchlight! {"lineNumbers": true}
# toggle_interior_alerts.yaml
alias: Toggle Indoor Camera Alerts
sequence:
  - service: automation.toggle
    data: {}
    target:
      entity_id: automation.interior_motion_alerts
  - service: notify.ntfy
    data:
      title: "Interior Camera Alerts"
      message: "Alerts are {{ states('automation.interior_motion_alerts') }}"
mode: single
icon: mdi:cctv
```
That's a wrap
This was a fun little project which had me digging a bit deeper into Home Assistant than I had previously ventured, and I'm really happy with how things turned out. I definitely learned a ton in the process. I might explore adding action buttons to the notifications↗ to directly snooze alerts that way, but that will have to wait a bit as I'm out of tinkering time for now.
Hopefully I only need to worry about vehicles in the driveway. Please don't drive through my backyard, thanks. ↩︎
description: "Using the power of Home Assistant automations and Ntfy push notifications to level-up security camera motion detections."
url: https://runtimeterror.dev/automating-camera-notifications-home-assistant-ntfy/

I Ditched vSphere for Proxmox VE
tags: homelab, linux, tailscale, proxmox, vmware

Way back in 2021, I documented how I had built a VMware-focused home lab on an Intel NUC 9 host. The setup was fairly complicated, specifically so I could build and test content for what was then known as vRealize Automation. My priorities have since shifted1, though, and I no longer have need for vRA at my house. vSphere + vCenter carries a hefty amount of overhead, so I thought it might be time to switch my homelab over to something a bit simpler in the form of Proxmox VE↗.
I only really have the one physical host2, so I knew that I wouldn't be able to do any sort of live migration. This move would require a full "production" outage as I converted the host, but I needed to ensure I could export some of my critical workloads off of vSphere and then import them into the new Proxmox VE environment. I run Home Assistant↗ in a VM, and it would be Double Plus Bad if I broke my smart home's brain in the process.
It took most of a Sunday afternoon, but I eventually fumbled my way through the hypervisor transformation and got things up and running again in time for my evening HA automations to fire at sunset3. I'm not going to detail the entire process here, but do want to highlight a few of the lessons I learned along the way. A lot of this information comes from Proxmox's Migration of servers to Proxmox VE↗ doc4.
The Plan
I've found that my most successful projects begin with a plan5:
1. Capture details like IPs and network configurations to try and maintain some consistency.
2. Export my needed VMs as OVFs, and save these to external storage.
3. Install Proxmox VE.
4. Transfer the OVFs to the new Proxmox host, and then import them.
5. Perform any required post-import cleanup to make things happy.

Exporting VMs to OVF
My initial plan was to just right-click the VMs in the vSphere UI and export them from there, but I'd forgotten how unreliable that process can be. A few smaller VMs exported fine, but most would fail after a while.
I switched to using VMware's ovftool↗ to do the exports and that worked better, though I did have to fumble through until I found the right syntax:
```
~/ovftool/ovftool --noSSLVerify \ # [tl! .cmd]
  "vi://${vsphere_username}@vsphere.local:${vsphere_password}@${vsphere_host}/${vsphere_datacenter}/vm/${vsphere_folder}/${vsphere_vm_name}" \
  "${vsphere_vm_name}.ova"
```
Along the way I also learned that I could search for a VM name with ?dns=${vsphere_vm_name} and not have to work out the complete path to it:
```
~/ovftool/ovftool --noSSLVerify \ # [tl! .cmd]
  "vi://${vsphere_username}@vsphere.local:${vsphere_password}@${vsphere_host}/?dns=${vsphere_vm_name}" \
  "${vsphere_vm_name}.ova"
```
Installing Proxmox VE
I had been running ESXi off of a small USB drive inserted in the NUC's internal USB port, and that typically worked fine since ESXi runs entirely in-memory after it's booted. Proxmox VE doesn't do the ramdisk thing, and I found after an initial test install that the performance when running from USB wasn't great. I dug around in my desk drawers and miscellaneous parts bins and found the 512GB M.2 2230 NVMe drive that originally came in my Steam Deck (before I upgraded it to a 2TB drive from Framework↗). I installed that (and a 2230->2242 adapter↗) in the NUC's third (and final) M.2 slot and used that as Proxmox VE's system drive.
Performance was a lot better that way!
After Proxmox VE booted, I went ahead and wiped the two 1TB NVMe drives (which had been my VMFS datastore) to get them ready for use. Without the heavyweight vCenter and vRA VMs, I wouldn't be needing as much storage so I elected to add these drives to a RAID1 ZFS pool (named zeefs) to get a bit of extra resiliency.
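For reference, here's roughly what that looks like from the shell (the same thing can also be done from the Proxmox web UI). This is a sketch under assumptions: the /dev/nvme* device names are placeholders for whatever your system actually shows, and the pvesm step just registers the pool as VM storage.

```shell
# Create a mirrored ("RAID1") ZFS pool named zeefs from the two 1TB drives
# (device names are placeholders - check yours with lsblk first)
zpool create zeefs mirror /dev/nvme0n1 /dev/nvme1n1

# Register the pool with Proxmox VE as a storage backend for VM disks
pvesm add zfspool zeefs --pool zeefs
```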
Importing VMs
Once Proxmox VE was up and (marginally) configured, I could start the process of restoring my workloads, beginning with the all-important Home Assistant VM.
I used scp to transfer it to the Proxmox VE host:
```
scp hassos.ova root@${proxmox_host}:/tmp/ #[tl! .cmd]
```
On the host, I needed to first extract the OVA archive so I could get at the OVF and VMDK files inside:
```
cd /tmp # [tl! .cmd_root:2]
tar xf hassos.ova
```
I could then use the qm command↗ to import the OVF. The syntax is:
```
qm importovf ${vm_id} ${ovf_filename} ${vm_storage}
```
I'll assign this VM ID number 100 and will deploy it on the zeefs storage:
```
qm importovf 100 hassos.ovf zeefs # [tl! .cmd_root]
```
Booting imported VMs (or trying to)
Once the import completed, I went to the Proxmox VE UI to see how things looked. The imported VM was missing a network interface, so I added a new virtio one. Everything else looked okay so I hit the friendly Start button and popped to the Console view to keep an eye on things.
Unfortunately, the boot hung with a message about not being able to find the OS. In this case, that's because the imported VM defaulted to a traditional BIOS firmware while the installed guest OS is configured for use with UEFI firmware. That's easy enough to change, though it does warn that I'll need to manually add an EFI disk for storing the configuration, so I'll do that as well. That allowed me to boot the VM successfully, and my Home Assistant system came online in its new home without much more fuss. I could then move forward with importing the remaining workloads. Some needed to have their network settings reconfigured, while some preserved them (I guess it depends on how the guest was addressing the NIC).
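In case it's handy, the firmware change described above can also be made from the shell instead of the UI. A quick sketch, assuming the VM ID 100 and zeefs storage from earlier (the efidisk options may vary for your setup):

```shell
# Switch the imported VM from the default SeaBIOS to UEFI (OVMF) firmware
qm set 100 --bios ovmf

# Add the small EFI disk it warns about, stored on the zeefs pool
qm set 100 --efidisk0 zeefs:1,efitype=4m
```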
In hindsight, I should have also recorded information about the firmware configuration of my VMs. Some used BIOS, some used UEFI, and I didn't really know which way a particular VM was going to lean until I tried booting it with the default SeaBIOS.
Tailscale I'm a big fan of Tailscale↗, and have been using it to make secure networking simple for a little while now. Naturally, I wanted to see how I could leverage it with this new setup.
On the host
While ESXi is a locked-down hypervisor which runs off a ramdisk (making installing arbitrary software kind of tricky/messy/dangerous), Proxmox VE is a customized Debian install with a bunch of VM-management tools built in. This means (among other things) that I can easily install Tailscale↗ and use that for securely accessing my Proxmox VE server remotely.
Installing Tailscale on a Proxmox VE host is basically the same as on any other Debian-based Linux OS:
```
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd_root]
```
I can then use tailscale up to start the process of logging in to my Tailscale account, and I throw in the --ssh flag to configure Tailscale SSH↗:
```
tailscale up --ssh # [tl! .cmd_root]
```
Once I'm logged in, I'll also use Tailscale Serve↗ as a reverse-proxy for Proxmox VE's web management interface:
```
tailscale serve --bg https+insecure://localhost:8006 # [tl! .cmd_root]
```
That takes a few minutes for the MagicDNS record and automatically-provisioned TLS certificate to go live, but I can then access my environment by going to https://prox1.my-tailnet.ts.net/, and my browser won't throw any warnings about untrusted certs. Very cool.
In LXC containers
One of the other slick features of Proxmox VE is the ability to run lightweight LXC system containers↗ right on the hypervisor. These are unprivileged containers by default, though, so attempting to bring Tailscale online fails:
```
tailscale up # [tl! .cmd_root focus:3]
failed to connect to local tailscaled; it doesn't appear to be running (sudo systemctl start tailscaled ?) # [tl! .nocopy]

systemctl start tailscaled # [tl! .cmd_root:1]
systemctl status tailscaled
x tailscaled.service - Tailscale node agent # [tl! .nocopy:2]
    Loaded: loaded (/lib/systemd/system/tailscaled.service; enabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Fri 2023-11-24 20:36:25 UTC; 463ms ago # [tl! focus highlight]
```
Fortunately, Tailscale has a doc↗ for working with unprivileged LXC containers. I just need to edit the CT's config file on the Proxmox host:
```
vim /etc/pve/lxc/${ID}.conf # [tl! .cmd_root]
```
And add these two lines at the bottom:
```
# torchlight! {"lineNumbers": true}
arch: amd64
cores: 2
features: nesting=1
hostname: gitlab
memory: 2048
nameserver: 192.168.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=B6:DE:F0:8A:5C:5C,ip=dhcp,type=veth
ostype: debian
rootfs: zeefs:subvol-104-disk-0,size=60G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 10:200 rwm # [tl! focus:1 ++:1]
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```
After stopping and restarting the container, Tailscale works just brilliantly!
Closing thoughts There were a few hiccups along the way, but I'm overall very happy with both the decision to ditch ESXi as well as how the migration to Proxmox VE went. I've been focused on VMware for so long, and hadn't really kept up with the other options. It's past time for me to broaden my virtualization horizons a bit.
Proxmox VE makes a lot more sense for a homelab setup, and I'm looking forward to learning more about it as I build more stuff on it.
And, if I'm being entirely honest, I'm a little uncertain of what the future may hold for VMware now that the Broadcom acquisition has closed↗. ↩︎
Sure, I've still got ESXi-ARM running on a Quartz64 single-board computer, but that doesn't really count. ↩︎
There's nothing like starting a project with a strict deadline! ↩︎
I realized in hindsight that I probably could have used the Server self-migration↗ steps to import VMs straight off of the VMFS datastore, but (1) I wasn't sure if/how that would work with a single datastore stretched across multiple disks and (2) I didn't think of it until after I had already imported from OVF. ↩︎
The plan usually doesn't survive for very long once the project starts, but establishing a plan is an important part of the project ritual. ↩︎
description: "I moved my homelab from VMware vSphere to Proxmox VE, and my only regret is that I didn't make this change sooner."
url: https://runtimeterror.dev/ditching-vsphere-for-proxmox/

Spotlight on Torchlight
tags: javascript, hugo, meta

I've been futzing around a bit with how code blocks render on this blog. Hugo has a built-in, really fast syntax highlighter↗ courtesy of Chroma↗. Chroma is basically automatic and it renders very quickly1 during the hugo build process, and it's a pretty solid "works everywhere out of the box" option.
That said, the one-size-fits-all approach may not actually fit everyone well, and Chroma does leave me wanting a bit more. Chroma sometimes struggles with tokenizing and highlighting certain languages, leaving me with boring monochromatic text blocks. Hugo's implementation supports highlighting individual lines by inserting directives next to the code fence backticks (like {hl_lines="11-13"} to highlight lines 11-13), but that can be clumsy if you're not sure which lines need to be highlighted2, need to highlight multiple disjointed lines, or later insert additional lines which throw off the count. And sometimes I'd like to share a full file for context while also collapsing it down to just the bits I'm going to write about. That's not something that can be done with the built-in highlighter (at least not without tacking on a bunch of extra JavaScript and CSS nonsense3).
But then I found a post from Sebastian de Deyne about Better code highlighting in Hugo with Torchlight↗, and I thought that Torchlight↗ sounded pretty promising.
From Torchlight's docs↗,
Torchlight is a VS Code-compatible syntax highlighter that requires no JavaScript, supports every language, every VS Code theme, line highlighting, git diffing, and more.
Unlike traditional syntax highlighting tools, Torchlight is an HTTP API that tokenizes and highlights your code on our backend server instead of in the visitor's browser.
We find this to be the easiest and most powerful way to achieve accurate and feature rich syntax highlighting.
Client-side language parsers are limited in their complexity since they have to run in the browser environment. There are a lot of edge cases that those libraries can't catch.
Torchlight relies on the VS Code parsing engine and TextMate language grammars to achieve the most accurate results possible. We bring the power of the entire VS Code ecosystem to your docs or blog.
In short: Code blocks in, formatted HTML out, and no JavaScript or extra code to render this slick display in the browser:
```
# torchlight! {"lineNumbers": true}
# netlify.toml
[build]
  publish = "public"

[build.environment]
  HUGO_VERSION = "0.111.3" # [tl! --]
  HUGO_VERSION = "0.116.1" # [tl! ++ reindex(-1)]

[context.production] # [tl! focus:5 highlight:3,1]
  command = """
  hugo --minify
  npm i @torchlight-api/torchlight-cli
  npx torchlight
  """

[context.preview] # [tl! collapse:start]
  command = """
  hugo --minify --environment preview
  npm i @torchlight-api/torchlight-cli
  npx torchlight
  """

[[headers]]
  for = "/*"
  [headers.values]
    X-Robots-Tag = "noindex"

[[redirects]]
  from = "/*"
  to = "/404/"
  status = 404 # [tl! collapse:end]
```
Pretty nice, right? That block's got:
- Colorful, accurate syntax highlighting
- Traditional line highlighting
- A shnazzy blur/focus to really make the important lines pop
- In-line diffs to show what's changed
- An expandable section to reveal additional context on-demand

And marking up that code block was pretty easy and intuitive. Torchlight is controlled by annotations↗ inserted as comments appropriate for whatever language you're using (like # [tl! highlight] to highlight a single line). In most cases you can just put the annotation right at the end of the line you're trying to flag. You can also specify ranges↗ relative to the current line ([tl! focus:5] to apply the focus effect to the current line and the next five) or use :start and :end so you don't have to count at all.
# torchlight! {&#34;torchlightAnnotations&#34;: false} # netlify.toml [build] publish = &#34;public&#34; [build.environment] # diff: remove this line HUGO_VERSION = &#34;0.111.3&#34; # [tl! --] # diff: add this line, adjust line numbering to compensate HUGO_VERSION = &#34;0.116.1&#34; # [tl! ++ reindex(-1)] # focus this line and the following 5, highlight the third line down [context.production] # [tl! focus:5 highlight:3,1] command = &#34;&#34;&#34; hugo --minify npm i @torchlight-api/torchlight-cli npx torchlight &#34;&#34;&#34; # collapse everything from \`:start\` to \`:end\` [context.preview] # [tl! collapse:start] command = &#34;&#34;&#34; hugo --minify --environment preview npm i @torchlight-api/torchlight-cli npx torchlight &#34;&#34;&#34; [[headers]] for = &#34;/*&#34; [headers.values] X-Robots-Tag = &#34;noindex&#34; [[redirects]] from = &#34;/*&#34; to = &#34;/404/&#34; status = 404 # [tl! collapse:end] See what I mean? Being able to put the annotations directly on the line(s) they modify is a lot easier to manage than trying to keep track of multiple line numbers in the header. And I think the effect is pretty cool.
Basic setup
So what did it take to get this working on my blog?
I started with registering for a free4 account at torchlight.dev↗ and generating an API token. I'll need to include that later with calls to the Torchlight API. The token will be stashed as an environment variable in my Netlify configuration, but I'll also stick it in a local .env file for use with local builds:
```
echo "TORCHLIGHT_TOKEN=torch_[...]" > ./.env # [tl! .cmd]
```
Installation
I then used npm to install Torchlight in the root of my Hugo repo:
```
npm i @torchlight-api/torchlight-cli # [tl! .cmd]
# [tl! .nocopy:1]
added 94 packages in 5s
```
That created a few new files and directories that I don't want to sync with the repo, so I added those to my .gitignore configuration. I'll also be sure to add that .env file so that I don't commit any secrets!
```
# torchlight! {"lineNumbers": true}
# .gitignore
.hugo_build.lock
/node_modules/ [tl! ++:2]
/package-lock.json
/package.json
/public/
/resources/
/.env [tl! ++]
```
The installation instructions↗ say to then initialize Torchlight like so:
npx torchlight init # [tl! .cmd focus] # [tl! .nocopy:start] node:internal/fs/utils:350 throw err; ^ Error: ENOENT: no such file or directory, open &#39;/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/stubs/config.js&#39; # [tl! focus] at Object.openSync (node:fs:603:3) at Object.readFileSync (node:fs:471:35) at write (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:524:39) at init (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:538:12) at Command.&lt;anonymous&gt; (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:722:12) at Command.listener [as _actionHandler] (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:488:17) at /home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1227:65 at Command._chainOrCall (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1144:12) at Command._parseCommand (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1227:27) at Command._dispatchSubcommand (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1050:25) { errno: -2, syscall: &#39;open&#39;, code: &#39;ENOENT&#39;, path: &#39;/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/stubs/config.js&#39; } Node.js v18.17.1 # [tl! .nocopy:end] Oh. Hmm.
There's an open issue↗ which reveals that the stub config file is actually located under the src/ directory instead of dist/. And it turns out the init step isn't strictly necessary, it's just a helper to get you a working config to start.
Configuration
Now that I know where the stub config lives, I can simply copy it to my repo root. I'll then get to work modifying it to suit my needs:
cp node_modules/@torchlight-api/torchlight-cli/src/stubs/config.js ./torchlight.config.js # [tl! .cmd] // torchlight! {&#34;lineNumbers&#34;: true} // torchlight.config.js module.exports = { // Your token from https://torchlight.dev token: process.env.TORCHLIGHT_TOKEN, // this will come from a netlify build var [tl! highlight focus] // The Torchlight client caches highlighted code blocks. Here you // can define which directory you&#39;d like to use. You&#39;ll likely // want to add this directory to your .gitignore. Set to // \`false\` to use an in-memory cache. You may also // provide a full cache implementation. cache: &#39;cache&#39;, // [tl! -- focus:1] cache: false, // disable cache for netlify builds [tl! ++ reindex(-1)] // Which theme you want to use. You can find all of the themes at // https://torchlight.dev/docs/themes. theme: &#39;material-theme-palenight&#39;, // [tl! -- focus:1] theme: &#39;one-dark-pro&#39;, // switch up the theme [tl! ++ reindex(-1)] // The Host of the API. host: &#39;https://api.torchlight.dev&#39;, // Global options to control block-level settings. // https://torchlight.dev/docs/options options: { // Turn line numbers on or off globally. lineNumbers: false, // Control the \`style\` attribute applied to line numbers. // lineNumbersStyle: &#39;&#39;, // Turn on +/- diff indicators. diffIndicators: true, // If there are any diff indicators for a line, put them // in place of the line number to save horizontal space. diffIndicatorsInPlaceOfLineNumbers: true // [tl! --] diffIndicatorsInPlaceOfLineNumbers: true, // [tl! ++ reindex(-1)] // When lines are collapsed, this is the text that will // be shown to indicate that they can be expanded. // summaryCollapsedIndicator: &#39;...&#39;, [tl! --] summaryCollapsedIndicator: &#39;Click to expand...&#39;, // make the collapse a little more explicit [tl! ++ reindex(-1)] }, // Options for the highlight command. highlight: { // Directory where your un-highlighted source files live. If // left blank, Torchlight will use the current directory. input: &#39;&#39;, // [tl! -- focus:1] input: &#39;public&#39;, // tells Torchlight where to find Hugo&#39;s processed HTML output [tl! ++ reindex(-1)] // Directory where your highlighted files should be placed. If // left blank, files will be modified in place. output: &#39;&#39;, // Globs to include when looking for files to highlight. includeGlobs: [ &#39;**/*.htm&#39;, &#39;**/*.html&#39; ], // String patterns to ignore (not globs). The entire file // path will be searched and if any of these strings // appear, the file will be ignored. excludePatterns: [ &#39;/node_modules/&#39;, &#39;/vendor/&#39; ] } } You can find more details about the configuration options here↗.
Stylization
It's not strictly necessary for the basic functionality, but applying a little bit of extra CSS to match up with the classes leveraged by Torchlight can help to make things look a bit more polished. Fortunately for this fake-it-til-you-make-it dev, Torchlight provides sample CSS that works great for this:
- Basic CSS↗ for generally making things look tidy
- Focus CSS↗ for that slick blur/focus effect
- Collapse CSS↗ for some accordion action

Put those blocks together (along with a few minor tweaks), and here's what I started with in assets/css/torchlight.css:
// torchlight! {&#34;lineNumbers&#34;: true} /********************************************* * Basic styling for Torchlight code blocks. * **********************************************/ /* Margin and rounding are personal preferences, overflow-x-auto is recommended. */ pre { border-radius: 0.25rem; margin-top: 1rem; margin-bottom: 1rem; overflow-x: auto; } /* Add some vertical padding and expand the width to fill its container. The horizontal padding comes at the line level so that background colors extend edge to edge. */ pre.torchlight { display: block; min-width: -webkit-max-content; min-width: -moz-max-content; min-width: max-content; padding-top: 1rem; padding-bottom: 1rem; } /* Horizontal line padding to match the vertical padding from the code block above. */ pre.torchlight .line { padding-left: 1rem; padding-right: 1rem; } /* Push the code away from the line numbers and summary caret indicators. */ pre.torchlight .line-number, pre.torchlight .summary-caret { margin-right: 1rem; } /********************************************* * Focus styling * **********************************************/ /* Blur and dim the lines that don&#39;t have the \`.line-focus\` class, but are within a code block that contains any focus lines. */ .torchlight.has-focus-lines .line:not(.line-focus) { transition: filter 0.35s, opacity 0.35s; filter: blur(.095rem); opacity: .65; } /* When the code block is hovered, bring all the lines into focus. */ .torchlight.has-focus-lines:hover .line:not(.line-focus) { filter: blur(0px); opacity: 1; } /********************************************* * Collapse styling * **********************************************/ .torchlight summary:focus { outline: none; } /* Hide the default markers, as we provide our own */ .torchlight details &gt; summary::marker, .torchlight details &gt; summary::-webkit-details-marker { display: none; } .torchlight details .summary-caret::after { pointer-events: none; } /* Add spaces to keep everything aligned */ .torchlight .summary-caret-empty::after, .torchlight details .summary-caret-middle::after, .torchlight details .summary-caret-end::after { content: &#34; &#34;; } /* Show a minus sign when the block is open. */ .torchlight details[open] .summary-caret-start::after { content: &#34;-&#34;; } /* And a plus sign when the block is closed. */ .torchlight details:not([open]) .summary-caret-start::after { content: &#34;+&#34;; } /* Hide the [...] indicator when open. */ .torchlight details[open] .summary-hide-when-open { display: none; } /* Show the [...] indicator when closed. */ .torchlight details:not([open]) .summary-hide-when-open { display: initial; } /********************************************* * Additional styling * **********************************************/ /* Fix for disjointed horizontal scrollbars */ .highlight div { overflow-x: visible; } I'll make sure that this CSS gets dynamically attached to any pages with a code block by adding this to the bottom of my layouts/partials/head.html:
```
<!-- syntax highlighting -->
{{ if (findRE "<pre" .Content 1) }}
  {{ $syntax := resources.Get "css/torchlight.css" | minify }}
  <link href="{{ $syntax.RelPermalink }}" rel="stylesheet">
{{ end }}
```
As a bit of housekeeping, I'm also going to remove the built-in highlighter configuration from my config/_default/markup.toml file to make sure it doesn't conflict with Torchlight:
```
# torchlight! {"lineNumbers": true}
# config/_default/markup.toml
[goldmark]
  [goldmark.renderer]
    hardWraps = false
    unsafe = true
    xhtml = false
  [goldmark.extensions]
    typographer = false

[highlight] # [tl! --:start]
  anchorLineNos = true
  codeFences = true
  guessSyntax = true
  hl_Lines = ''
  lineNos = false
  lineNoStart = 1
  lineNumbersInTable = false
  noClasses = false
  tabwidth = 2
  style = 'monokai' # [tl! --:end]

# Table of contents # [tl! reindex(10)]
# Add toc = true to content front matter to enable
[tableOfContents]
  endLevel = 5
  ordered = false
  startLevel = 3
```
Building
Now that the pieces are in place, it's time to start building!
Local
I like to preview my blog as I work on it so that I know what it will look like before I hit git push and let Netlify do its magic. And Hugo has been fantastic for that! But since I'm offloading the syntax highlighting to the Torchlight API, I'll need to manually build the site instead of relying on Hugo's instant preview builds.
There are a couple of steps I'll use for this:
1. First, I'll source .env to load the TORCHLIGHT_TOKEN for the API.
2. Then, I'll use hugo --minify --environment local -D to render my site into the public/ directory.
3. Next, I'll call npx torchlight to parse the HTML files in public/, extract the content of any <pre>/<code> blocks, send it to the Torchlight API to work the magic, and write the formatted code blocks back to the existing HTML files.
4. Finally, I use python3 -m http.server --directory public 1313 to serve the public/ directory so I can view the content at http://localhost:1313.

I'm lazy, though, so I'll even put that into a quick build.sh script to help me run local builds:
```
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash
# Quick script to run local builds
source .env
hugo --minify --environment local -D
npx torchlight
python3 -m http.server --directory public 1313
```
Now I can just make the script executable and fire it off:
chmod +x build.sh # [tl! focus:3 .cmd:1] ./build.sh Start building sites … # [tl! .nocopy:start] hugo v0.111.3+extended linux/amd64 BuildDate=unknown VendorInfo=nixpkgs | EN -------------------+------ Pages | 202 Paginator pages | 0 Non-page files | 553 Static files | 49 Processed images | 0 Aliases | 5 Sitemaps | 1 Cleaned | 0 Total in 248 ms Highlighting index.html Highlighting 3d-modeling-and-printing-on-chrome-os/index.html Highlighting 404/index.html Highlighting about/index.html # [tl! collapse:start] + + + O o &#39; ________________ _ \\__(=======/_=_/____.--&#39;-\`--.___ \\ \\ \`,--,-.___.----&#39; .--\`\\\\--&#39;../ | &#39;---._____.|] -0- |o * | -0- -O- &#39; o 0 | &#39; . -0- . &#39; Did you really want to see the full file list? Highlighting tags/vsphere/index.html # [tl! collapse:end] Highlighting tags/windows/index.html Highlighting tags/wireguard/index.html Highlighting tags/wsl/index.html # [tl! focus:1] Writing to /home/john/projects/runtimeterror/public/abusing-chromes-custom-search-engines-for-fun-and-profit/index.html Writing to /home/john/projects/runtimeterror/public/auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker/index.html Writing to /home/john/projects/runtimeterror/public/cat-file-without-comments/index.html # [tl! collapse:start] &#39; * + -O- | o o . ___________ 0 o . +/-/_&#34;/-/_/-/| -0- o -O- * * /&#34;-/-_&#34;/-_//|| . -O- /__________/|/| + | * |&#34;|_&#39;=&#39;-]:+|/|| . o -0- . * |-+-|.|_&#39;-&#34;||// + | | &#39; &#39; 0 |[&#34;.[:!+-&#39;=|// | -0- 0 -O- |=&#39;!+|-:]|-|/ -0- o |-0- 0 -O- ---------- * | -O| + o o -O- -0- -0- -O- | + | -O- | -0- -0- . O -O- | -O- * your code will be assimilated Writing to /home/john/projects/runtimeterror/public/k8s-on-vsphere-node-template-with-packer/index.html # [tl! collapse:end] Writing to /home/john/projects/runtimeterror/public/tanzu-community-edition-k8s-homelab/index.html Serving HTTP on 0.0.0.0 port 1313 (http://0.0.0.0:1313/) ... # [tl! focus:1] 127.0.0.1 - - [07/Nov/2023 20:34:29] &#34;GET /spotlight-on-torchlight/ HTTP/1.1&#34; 200 - Netlify Setting up Netlify to leverage the Torchlight API is kind of similar. I'll start with logging in to the Netlify dashboard↗ and navigating to Site Configuration &gt; Environment Variables. There, I'll click on Add a variable &gt; Add a ingle variable. I'll give the new variable a key of TORCHLIGHT_TOKEN and set its value to the token I obtained earlier.
Once that's done, I edit the netlify.toml file at the root of my site repo to alter the build commands:
```
# torchlight! {"lineNumbers": true}
[build]
  publish = "public"

[build.environment]
  HUGO_VERSION = "0.111.3"

[context.production] # [tl! focus:6]
  command = "hugo" # [tl! -- ++:1,5 reindex(-1):1,1]
  command = """
  hugo --minify
  npm i @torchlight-api/torchlight-cli
  npx torchlight
  """
```
Now when I git push new content, Netlify will use Hugo to build the site, then install and call Torchlight to ++fancy; the code blocks before the site gets served. Very nice!
#Goals Of course, I. Just. Can't. leave well enough alone, so my work here isn't finished - not by a long shot.
You see, I'm a sucker for handy &quot;copy&quot; buttons attached to code blocks, and that's not something that Torchlight does (it just returns rendered HTML, remember? No fancy JavaScript here). I also wanted to add informative prompt indicators (like $ and #) to code blocks representing command-line inputs (rather than script files). And I'd like to flag text returned by a command so that only the commands get copied, effectively ignoring the returned text, diff-removed lines, diff markers, line numbers, and prompt indicators.
I had previously implemented a solution based heavily on Justin James' blog post, Hugo - Dynamically Add Copy Code Snippet Button↗. Getting that Chroma-focused solution to work well with Torchlight-formatted code blocks took some work, particularly since I'm inept at web development and can barely spell &quot;CSS&quot; and &quot;JavaScrapped&quot;.
But I5 eventually fumbled through the changes required to meet my #goals, and I'm pretty happy with how it all works.
Custom classes Remember Torchlight's in-line annotations that I mentioned earlier? They're pretty capable out of the box, but can also be expanded through the use of custom classes↗. This makes it easy to selectively apply special handling to selected lines of code, something that's otherwise pretty dang tricky to do with Chroma.
So, for instance, I could add a class .cmd for standard user-level command-line inputs:
# torchlight! {&#34;torchlightAnnotations&#34;:false} sudo make me a sandwich # [tl! .cmd] sudo make me a sandwich # [tl! .cmd] Or .cmd_root for a root prompt:
# torchlight! {&#34;torchlightAnnotations&#34;: false} wall &#34;Make your own damn sandwich.&#34; # [tl! .cmd_root] wall &#34;Make your own damn sandwich.&#34; # [tl! .cmd_root] And for deviants:
# torchlight! {&#34;torchlightAnnotations&#34;: false} Write-Host -ForegroundColor Green &#34;A taco is a sandwich&#34; # [tl! .cmd_pwsh] Write-Host -ForegroundColor Green &#34;A taco is a sandwich&#34; # [tl! .cmd_pwsh] I also came up with a cleverly-named .nocopy class for the returned lines that shouldn't be copyable:
# torchlight! {&#34;torchlightAnnotations&#34;: false} copy this # [tl! .cmd] but not this # [tl! .nocopy] copy this # [tl! .cmd] but not this # [tl! .nocopy] So that's how I'll tie my custom classes to individual lines of code6, but I still need to actually define those classes.
I'll drop those at the bottom of the assets/css/torchlight.css file I created earlier:
// torchlight! {&#34;lineNumbers&#34;: true} /* [tl! collapse:start] /********************************************* * Basic styling for Torchlight code blocks. * **********************************************/ /* Margin and rounding are personal preferences, overflow-x-auto is recommended. */ pre { border-radius: 0.25rem; margin-top: 1rem; margin-bottom: 1rem; overflow-x: auto; } /* Add some vertical padding and expand the width to fill its container. The horizontal padding comes at the line level so that background colors extend edge to edge. */ pre.torchlight { display: block; min-width: -webkit-max-content; min-width: -moz-max-content; min-width: max-content; padding-top: 1rem; padding-bottom: 1rem; } /* Horizontal line padding to match the vertical padding from the code block above. */ pre.torchlight .line { padding-left: 1rem; padding-right: 1rem; } /* Push the code away from the line numbers and summary caret indicators. */ pre.torchlight .line-number, pre.torchlight .summary-caret { margin-right: 1rem; } /********************************************* * Focus styling * **********************************************/ /* Blur and dim the lines that don&#39;t have the \`.line-focus\` class, but are within a code block that contains any focus lines. */ .torchlight.has-focus-lines .line:not(.line-focus) { transition: filter 0.35s, opacity 0.35s; filter: blur(.095rem); opacity: .65; } /* When the code block is hovered, bring all the lines into focus. */ .torchlight.has-focus-lines:hover .line:not(.line-focus) { filter: blur(0px); opacity: 1; } /********************************************* * Collapse styling * **********************************************/ .torchlight summary:focus { outline: none; } /* Hide the default markers, as we provide our own */ .torchlight details &gt; summary::marker, .torchlight details &gt; summary::-webkit-details-marker { display: none; } .torchlight details .summary-caret::after { pointer-events: none; } /* Add spaces to keep everything aligned */ .torchlight .summary-caret-empty::after, .torchlight details .summary-caret-middle::after, .torchlight details .summary-caret-end::after { content: &#34; &#34;; } /* Show a minus sign when the block is open. */ .torchlight details[open] .summary-caret-start::after { content: &#34;-&#34;; } /* And a plus sign when the block is closed. */ .torchlight details:not([open]) .summary-caret-start::after { content: &#34;+&#34;; } /* Hide the [...] indicator when open. */ .torchlight details[open] .summary-hide-when-open { display: none; } /* Show the [...] indicator when closed. */ .torchlight details:not([open]) .summary-hide-when-open { display: initial; } /* [tl! collapse:end] /********************************************* * Additional styling * **********************************************/ /* Fix for disjointed horizontal scrollbars */ .highlight div { overflow-x: visible; } /* [tl! focus:start] Insert prompt indicators on interactive shells. */ .cmd::before { color: var(--base07); content: &#34;$ &#34;; } .cmd_root::before { color: var(--base08); content: &#34;# &#34;; } .cmd_pwsh::before { color: var(--base07); content: &#34;PS&gt; &#34;; } /* Don&#39;t copy shell outputs */ .nocopy { webkit-user-select: none; user-select: none; } /* [tl! focus:end] The .cmd classes will simply insert the respective prompt before each flagged line, and the .nocopy class will prevent those lines from being selected (and copied). Now for the tricky part...
Copy that blocky There are two major pieces for the code-copy wizardry: the CSS to style/arrange the copy button and language label, and the JavaScript to make it work.
I put the CSS in assets/css/code-copy-button.css:
// torchlight! {&#34;lineNumbers&#34;: true} /* adapted from https://digitaldrummerj.me/hugo-add-copy-code-snippet-button/ */ .highlight { position: relative; z-index: 0; padding: 0; margin:40px 0 10px 0; border-radius: 4px; } .copy-code-button { position: absolute; z-index: -1; right: 0px; top: -26px; font-size: 13px; font-weight: 700; line-height: 14px; letter-spacing: 0.5px; width: 65px; color: var(--fg); background-color: var(--bg); border: 1.25px solid var(--off-bg); border-top-left-radius: 4px; border-top-right-radius: 4px; border-bottom-right-radius: 0px; border-bottom-left-radius: 0px; white-space: nowrap; padding: 6px 6px 7px 6px; margin: 0 0 0 1px; cursor: pointer; opacity: 0.6; } .copy-code-button:hover, .copy-code-button:focus, .copy-code-button:active, .copy-code-button:active:hover { color: var(--off-bg); background-color: var(--off-fg); opacity: 0.8; } .copyable-text-area { position: absolute; height: 0; z-index: -1; opacity: .01; } .torchlight [data-lang]:before { position: absolute; z-index: -1; top: -26px; left: 0px; content: attr(data-lang); font-size: 13px; font-weight: 700; color: var(--fg); background-color: var(--bg); border-top-left-radius: 4px; border-top-right-radius: 4px; border-bottom-left-radius: 0; border-bottom-right-radius: 0; padding: 6px 6px 7px 6px; line-height: 14px; opacity: 0.6; position: absolute; letter-spacing: 0.5px; border: 1.25px solid var(--off-bg); margin: 0 0 0 1px; } And, as before, I'll link this from the bottom of my layouts/partial/head.html so it will get loaded on the appropriate pages:
&lt;!-- syntax highlighting --&gt; {{ if (findRE &#34;&lt;pre&#34; .Content 1) }} {{ $syntax := resources.Get &#34;css/torchlight.css&#34; | minify }} &lt;link href=&#34;{{ $syntax.RelPermalink }}&#34; rel=&#34;stylesheet&#34;&gt; {{ $copyCss := resources.Get &#34;css/code-copy-button.css&#34; | minify }} &lt;!-- [tl! ++:1 ] --&gt; &lt;link href=&#34;{{ $copyCss.RelPermalink }}&#34; rel=&#34;stylesheet&#34;&gt; {{ end }} Code behind the copy That sure makes the code blocks and accompanying button / labels look pretty great, but I still need to actually make the button work. For that, I'll need some JavaScript that (again) largely comes from Justin's post.
With all the different classes and things used with Torchlight, it took a lot of (generally misguided) tinkering for me to get the script to copy just the text I wanted (and nothing else). I learned a ton in the process, so I've highlighted the major deviations from Justin's script.
Anyway, here's my assets/js/code-copy-button.js:
// torchlight! {&#34;lineNumbers&#34;: true} // adapted from https://digitaldrummerj.me/hugo-add-copy-code-snippet-button/ function createCopyButton(highlightDiv) { const button = document.createElement(&#34;button&#34;); button.className = &#34;copy-code-button&#34;; button.type = &#34;button&#34;; button.innerText = &#34;Copy&#34;; button.addEventListener(&#34;click&#34;, () =&gt; copyCodeToClipboard(button, highlightDiv)); highlightDiv.insertBefore(button, highlightDiv.firstChild); const wrapper = document.createElement(&#34;div&#34;); wrapper.className = &#34;highlight-wrapper&#34;; highlightDiv.parentNode.insertBefore(wrapper, highlightDiv); wrapper.appendChild(highlightDiv); } document.querySelectorAll(&#34;.highlight&#34;).forEach((highlightDiv) =&gt; createCopyButton(highlightDiv)); // [tl! focus:start] async function copyCodeToClipboard(button, highlightDiv) { // capture all code lines in the selected block which aren&#39;t classed \`nocopy\` or \`line-remove\` let codeToCopy = highlightDiv.querySelectorAll(&#34;:last-child &gt; .torchlight &gt; code &gt; .line:not(.nocopy, .line-remove)&#34;); // now remove the first-child of each line if it is of class \`line-number\` codeToCopy = Array.from(codeToCopy).reduce((accumulator, line) =&gt; { if (line.firstChild.className != &#34;line-number&#34;) { return accumulator + line.innerText + &#34;\\n&#34;; } else { return accumulator + Array.from(line.children).filter( (child) =&gt; child.className != &#34;line-number&#34;).reduce( (accumulator, child) =&gt; accumulator + child.innerText, &#34;&#34;) + &#34;\\n&#34;; } }, &#34;&#34;); // [tl! focus:end] try { var result = await navigator.permissions.query({ name: &#34;clipboard-write&#34; }); if (result.state == &#34;granted&#34; || result.state == &#34;prompt&#34;) { await navigator.clipboard.writeText(codeToCopy); } else { button.blur(); button.innerText = &#34;Error!&#34;; setTimeout(function () { button.innerText = &#34;Copy&#34;; }, 2000); } } catch (_) { button.blur(); button.innerText = &#34;Error!&#34;; setTimeout(function () { button.innerText = &#34;Copy&#34;; }, 2000); } finally { button.blur(); button.innerText = &#34;Copied!&#34;; setTimeout(function () { button.innerText = &#34;Copy&#34;; }, 2000); } } And this script gets called from the bottom of my layouts/partials/footer.html:
{{ if (findRE &#34;&lt;pre&#34; .Content 1) }} {{ $jsCopy := resources.Get &#34;js/code-copy-button.js&#34; | minify }} &lt;script src=&#34;{{ $jsCopy.RelPermalink }}&#34;&gt;&lt;/script&gt; {{ end }} Going live! And at this point, I can just run my build.sh script again to rebuild the site locally and verify that it works as well as I think it does.
It looks pretty good to me, so I'll go ahead and push this up to Netlify. If all goes well, this post and the new code block styling will go live at the same time.
See you on the other side!
Did I mention that it's fast?&#160;&#x21a9;&#xfe0e;
(or how to count to eleven)&#160;&#x21a9;&#xfe0e;
Spoiler: I'm going to tack on some JS and CSS nonsense later - we'll get to that.&#160;&#x21a9;&#xfe0e;
Torchlight is free for sites which don't generate revenue, though it does require a link back to torchlight.dev. I stuck the attribution link in the footer. More pricing info here↗.&#160;&#x21a9;&#xfe0e;
With a little help from my Copilot buddy...&#160;&#x21a9;&#xfe0e;
Or ranges of lines, using the same syntax as before: [tl! .nocopy:5] will make this line and the following five uncopyable.&#160;&#x21a9;&#xfe0e;
`,description:"Syntax highlighting powered by the Torchlight.dev API makes it easier to dress up code blocks. Here's an overview of what I did to replace this blog's built-in Hugo highlighter (Chroma) with Torchlight.",url:"https://runtimeterror.dev/spotlight-on-torchlight/"},"https://runtimeterror.dev/systemctl-edit-delay-service-startup/":{title:"Using `systemctl edit` to Delay Service Startup",tags:["crostini","linux","tailscale"],content:`Following a recent update, I found that the Linux development environment↗ on my Framework Chromebook would fail to load if the Tailscale daemon was already running. It seems that the Tailscale virtual interface may have interfered with how the CrOS Terminal app was expecting to connect to the Linux container. I initially worked around the problem by just disabling the tailscaled service, but having to remember to start it up manually was a pretty heavy cognitive load.
Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the systemctl edit command to create a quick override configuration:
sudo systemctl edit tailscaled # [tl! .cmd] This shows me the existing contents of the tailscaled.service definition so I can easily insert some overrides above. In this case, I just want to use sleep 5 as the ExecStartPre command so that the service start will be delayed by 5 seconds: Upon saving the file, it gets installed to /etc/systemd/system/tailscaled.service.d/override.conf. Now the Tailscale interface won't automatically come up until a few seconds later, and that's enough to let my Terminal app start up reliably once more.
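For reference, the saved drop-in is only a few lines long; a minimal sketch of its contents (omitting the commented-out copy of the original unit that the editor displays) looks like this:

# /etc/systemd/system/tailscaled.service.d/override.conf
[Service]
# wait a few seconds before starting tailscaled so the Linux container can come up first
ExecStartPre=/bin/sleep 5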
Easy peasy.
`,description:"Quick notes on using `systemctl edit` to override a systemd service to delay its startup.",url:"https://runtimeterror.dev/systemctl-edit-delay-service-startup/"},"https://runtimeterror.dev/easy-push-notifications-with-ntfy/":{title:"Easy Push Notifications With ntfy.sh",tags:["android","api","automation","caddy","containers","docker","homeassistant","linux","rest","selfhosting","shell"],content:` The Pitch Wouldn't it be great if there was a simple way to send a notification to your phone(s) with just a curl call? Then you could get notified when a script completes, a server reboots, a user logs in to a system, or a sensor connected to Home Assistant changes state. How great would that be??
ntfy.sh↗ (pronounced notify) provides just that. It's an open-source↗, easy-to-use, HTTP-based notification service, and it can notify using mobile apps for Android (Play↗ or F-Droid↗) or iOS (App Store↗) or a web app↗.
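Publishing a message really is just an HTTP POST to a topic URL. As a minimal sketch (using the public ntfy.sh server and a made-up topic name):

# send a push to anything subscribed to the 'my-example-topic' topic
curl -d "Backup complete" https://ntfy.sh/my-example-topic

Any device subscribed to that topic would get the push.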
I thought it sounded pretty compelling - and then I noticed that ntfy's docs↗ made it sound really easy to self-host the server component, which would give me a bit more control and peace of mind.
Topics are public
Ntfy uses a pub-sub↗ approach, and (by default) all topics are public. This means that anyone can write to or read from any topic, which makes it important to use a topic name that others aren't likely to guess (one way to generate such a name is sketched just after this note).
Self-hosting lets you define ACLs↗ to protect sensitive topics.
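One quick way to come up with a suitably unguessable topic name is to just generate a random string; any random-string generator works, for example:

# print a random 24-character hex string to use as a topic name
openssl rand -hex 12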
So let's take it for a spin!
The Setup I'm going to use the Docker setup↗ on a small cloud server and use Caddy↗ as a reverse proxy. I'll also configure ntfy to require authentication so that randos (hi!) won't be able to harass me with notifications.
Ntfy in Docker So I'll start by creating a new directory at /opt/ntfy/ to hold the goods, and create a compose config.
sudo mkdir -p /opt/ntfy # [tl! .cmd:1] sudo vim /opt/ntfy/docker-compose.yml # torchlight! {&#34;lineNumbers&#34;: true} # /opt/ntfy/docker-compose.yml version: &#34;2.3&#34; services: ntfy: image: binwiederhier/ntfy container_name: ntfy command: - serve environment: - TZ=UTC # optional, set desired timezone volumes: - ./cache/ntfy:/var/cache/ntfy - ./etc/ntfy:/etc/ntfy - ./lib/ntf:/var/lib/ntfy ports: - 2586:80 healthcheck: # this should be the port inside the container, not the host port test: [ &#34;CMD-SHELL&#34;, &#34;wget -q --tries=1 http://localhost:80/v1/health -O - | grep -Eo &#39;\\&#34;healthy\\&#34;\\\\s*:\\\\s*true&#39; || exit 1&#34; ] interval: 60s timeout: 10s retries: 3 start_period: 40s restart: unless-stopped This config will create/mount folders in the working directory to store the ntfy cache and config. It also maps localhost:2586 to port 80 on the container, and enables a simple healthcheck against the ntfy health API endpoint. This will ensure that the container will be automatically restarted if it stops working.
I can go ahead and bring it up:
sudo docker-compose up -d # [tl! focus:start .cmd] Creating network &#34;ntfy_default&#34; with the default driver # [tl! .nocopy:start] Pulling ntfy (binwiederhier/ntfy:)... latest: Pulling from binwiederhier/ntfy # [tl! focus:end] 7264a8db6415: Pull complete 1ac6a3b2d03b: Pull complete Digest: sha256:da08556da89a3f7317557fd39cf302c6e4691b4f8ce3a68aa7be86c4141e11c8 Status: Downloaded newer image for binwiederhier/ntfy:latest # [tl! focus:1] Creating ntfy ... done # [tl! .nocopy:end] Caddy Reverse Proxy I'll also want to add the following↗ to my Caddy config:
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
  reverse_proxy localhost:2586

  # Redirect HTTP to HTTPS, but only for GET topic addresses, since we want
  # it to work with curl without the annoying https:// prefix
  @httpget {
    protocol http
    method GET
    path_regexp ^/([-_a-z0-9]{0,64}$|docs/|static/)
  }
  redir @httpget https://{host}{uri}
}

And I'll restart Caddy to apply the config:
sudo systemctl restart caddy # [tl! .cmd] Now I can point my browser to https://ntfy.runtimeterror.dev and see the web interface:
I can subscribe to a new topic: And publish a message to it:
curl -d &#34;Hi&#34; https://ntfy.runtimeterror.dev/testy # [tl! .cmd] {&#34;id&#34;:&#34;80bUl6cKwgBP&#34;,&#34;time&#34;:1694981305,&#34;expires&#34;:1695024505,&#34;event&#34;:&#34;message&#34;,&#34;topic&#34;:&#34;testy&#34;,&#34;message&#34;:&#34;Hi&#34;} # [tl! .nocopy] Which will then show up as a notification in my browser: Post-deploy Configuration So now I've got my own ntfy server, and I've verified that it works for unauthenticated notifications. I don't really want to operate anything on the internet without requiring authentication, though, so I'm going to configure ntfy to prevent unauthenticated reads and writes.
I'll start by creating a server.yml config file which will be mounted into the container. This config will specify where to store the user database and switch the default ACL to deny-all:
# torchlight! {"lineNumbers": true}
# /opt/ntfy/etc/ntfy/server.yml
auth-file: "/var/lib/ntfy/user.db"
auth-default-access: "deny-all"
base-url: "https://ntfy.runtimeterror.dev"

I can then restart the container, and try again to subscribe to the same (or any other topic):
sudo docker-compose down &amp;&amp; sudo docker-compose up -d # [tl! .cmd] Now I get prompted to log in: I'll need to use the ntfy CLI to create/manage entries in the user DB, and that means first grabbing a shell inside the container:
sudo docker exec -it ntfy /bin/sh # [tl! .cmd] For now, I'm going to create three users: one as an administrator, one as a &quot;writer&quot;, and one as a &quot;reader&quot;. I'll be prompted for a password for each:
ntfy user add --role=admin administrator # [tl! .cmd] user administrator added with role admin # [tl! .nocopy:1] ntfy user add writer # [tl! .cmd] user writer added with role user # [tl! .nocopy:1] ntfy user add reader # [tl! .cmd] user reader added with role user # [tl! .nocopy] The admin user has global read+write access, but right now the other two can't do anything. Let's make it so that writer can write to all topics, and reader can read from all topics:
ntfy access writer &#39;*&#39; write # [tl! .cmd:1] ntfy access reader &#39;*&#39; read I could lock these down further by selecting specific topic names instead of '*' but this will do fine for now.
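For instance, restricting the writer account to just the server_alerts topic I'll be using later (instead of the * wildcard) would look something like this:

ntfy access writer server_alerts write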
Let's go ahead and verify the access as well:
ntfy access # [tl! .cmd] user administrator (role: admin, tier: none) # [tl! .nocopy:8] - read-write access to all topics (admin role) user reader (role: user, tier: none) - read-only access to topic * user writer (role: user, tier: none) - write-only access to topic * user * (role: anonymous, tier: none) - no topic-specific permissions - no access to any (other) topics (server config) While I'm at it, I also want to configure an access token to be used with the writer account. I'll be able to use that instead of username+password when publishing messages.
ntfy token add writer # [tl! .cmd] token tk_mm8o6cwxmox11wrnh8miehtivxk7m created for user writer, never expires # [tl! .nocopy] I can go back to the web, subscribe to the testy topic again using the reader credentials, and then test sending an authenticated notification with curl:
curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \ # [tl! .cmd]
  -d "Once more, with auth!" \
  https://ntfy.runtimeterror.dev/testy
{"id":"0dmX9emtehHe","time":1694987274,"expires":1695030474,"event":"message","topic":"testy","message":"Once more, with auth!"} # [tl! .nocopy]

The Use Case
Pushing notifications from the command line is neat, but how can I use this to actually make my life easier? Let's knock out quick configurations for a couple of the use cases I pitched at the top of the post: alerting me when a server has booted, and handling Home Assistant notifications in a better way.
Notify on Boot I'm sure there are a bunch of ways to get a Linux system to send a simple curl call on boot. I'm going to create a simple script that will be triggered by a systemd service definition.
Generic Push Script I may want to wind up having servers notify for a variety of conditions so I'll start with a generic script which will accept a notification title and message as arguments:
/usr/local/bin/ntfy_push.sh:
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash
curl \
  -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \
  -H "Title: $1" \
  -d "$2" \
  https://ntfy.runtimeterror.dev/server_alerts

Note that I'm using a new topic name now: server_alerts. Topics are automatically created when messages are posted to them. I just need to make sure to subscribe to the topic in the web UI (or mobile app) so that I can receive these notifications.
Okay, now let's make it executable and then give it a quick test:
chmod +x /usr/local/bin/ntfy_push.sh # [tl! .cmd:1] /usr/local/bin/ntfy_push.sh &#34;Script Test&#34; &#34;This is a test from the magic script I just wrote.&#34; Wrapper for Specific Message I don't know an easy way to tell a systemd service definition to pass arguments to a command, so I'll use a quick wrapper script to pass in the notification details:
/usr/local/bin/ntfy_boot_complete.sh:
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash
TITLE="$(hostname -s)"
MESSAGE="System boot complete"
/usr/local/bin/ntfy_push.sh "$TITLE" "$MESSAGE"

And this one should be executable as well:
chmod +x /usr/local/bin/ntfy_boot_complete.sh # [tl! .cmd] Service Definition Finally I can create and register the service definition so that the script will run at each system boot.
/etc/systemd/system/ntfy_boot_complete.service:
# torchlight! {"lineNumbers": true}
[Unit]
After=network.target

[Service]
ExecStart=/usr/local/bin/ntfy_boot_complete.sh

[Install]
WantedBy=default.target

sudo systemctl daemon-reload # [tl! .cmd:1]
sudo systemctl enable --now ntfy_boot_complete.service

And I can test it by rebooting my server. I should get a push notification shortly...
Nice! Now I won't have to continually ping a server to see if it's finished rebooting yet.
Home Assistant I've been using Home Assistant↗ for years, but have never been completely happy with the notifications built into the mobile app. Each instance of the app registers itself as a different notification endpoint, and it can become kind of cumbersome to keep those updated in the Home Assistant configuration.
Enabling ntfy as a notification handler is pretty straight-forward, and it will allow me to receive alerts on all my various devices without even needing to have the Home Assistant app installed.
Notify Configuration I'll add ntfy to Home Assistant by using the RESTful Notifications↗ integration. For that, I just need to update my instance's configuration.yaml to configure the connection.
# torchlight! {"lineNumbers": true}
# configuration.yaml
notify:
  - name: ntfy
    platform: rest
    method: POST_JSON
    headers:
      Authorization: !secret ntfy_token
    data:
      topic: home_assistant
    title_param_name: title
    message_param_name: message
    resource: https://ntfy.runtimeterror.dev

The Authorization line references a secret stored in secrets.yaml:
# torchlight! {"lineNumbers": true}
# secrets.yaml
ntfy_token: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m

After reloading the Home Assistant configuration, I can use Developer Tools > Services to send a test notification:
Automation Configuration I'll use the Home Assistant UI to push a notification through ntfy if any of my three water leak sensors switch from Dry to Wet:
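For reference, the YAML equivalent of that UI-built automation looks roughly like this (the entity IDs below are placeholders rather than my actual sensor names):

# a rough automations.yaml sketch of the UI-built automation
- alias: "Notify on water leak"
  trigger:
    - platform: state
      entity_id:
        - binary_sensor.leak_sensor_1
        - binary_sensor.leak_sensor_2
        - binary_sensor.leak_sensor_3
      from: "off" # 'Dry'
      to: "on"    # 'Wet'
  action:
    - service: notify.ntfy
      data:
        title: Leak detected!
        message: "{{ trigger.to_state.attributes.friendly_name }} detected."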
The business end of this is the service call at the end:
# torchlight! {"lineNumbers": true}
service: notify.ntfy
data:
  title: Leak detected!
  message: "{{ trigger.to_state.attributes.friendly_name }} detected."

The Wrap-up
That was pretty easy, right? It didn't take a lot of effort to set up a self-hosted notification server that can be triggered by a simple authenticated HTTP POST, and now my brain is fired up thinking about all the other ways I can use this to stay informed about what's happening on my various systems.
Maybe my notes can help you get started with ntfy.sh, and I hope you'll let me know in the comments if you come up with any other killer use cases. Thanks for reading.
`,description:"Deploying and configuring a self-hosted pub-sub notification handler, getting another server to send a notifcation when it boots, and integrating the notification handler into Home Assistant.",url:"https://runtimeterror.dev/easy-push-notifications-with-ntfy/"},"https://runtimeterror.dev/virtuallypotato-runtimeterror/":{title:"virtuallypotato -> runtimeterror",tags:["meta"],content:`cp -a virtuallypotato.com runtimeterror.dev # [tl! .cmd:2] rm -rf virtuallypotato.com ln -s virtuallypotato.com runtimeterror.dev If you've noticed that things look a bit different around here, you might also have noticed that my posts about VMware products had become less and less frequent over the past year or so. That wasn't intentional, but a side-effect of some shifting priorities with a new position at work. I'm no longer on the team responsible for our VMware environment and am now more focused on cloud-native technologies and open-source DevOps solutions. The new role keeps me pretty busy, and I'm using what free time I have to learn more about and experiment with the technologies I use at work.
That (unfortunately) means that I won't be posting much (if at all) about VMware-related things (including the vRA8 series of posts)1 going forward. Instead, expect to see more posts about things like containers, infrastructure-as-code, self-hosting, and miscellaneous tech projects that I play with.
I decided to migrate, rebrand, and re-theme my blog to reflect this change in focus. virtuallypotato used a theme heavily inspired by VMware design language↗, and I don't think it's a great fit for the current (and future) content anymore. That theme is also very feature-rich, which provides a lot of capability out of the box but makes it a bit tricky to modify (and maintain) my personal tweaks. The new runtimeterror2 site uses a more minimal theme↗ which takes cues from terminals and markdown formatting. It's also simpler and thus easier for me to tweak. I've done a lot of that already and anticipate doing a bit more in the coming weeks, but I wanted to go ahead and make this thing "live" for now.
I've copied all content from the old site (including post comments), and set things up so that all existing links should seamlessly redirect. Hopefully that will make for a pretty easy transition, but please add a comment if you see anything that's not quite working right.
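(This isn't necessarily the exact mechanism in play here, but as an illustration of how a domain-level redirect like that can be handled, a hypothetical Netlify _redirects rule might look like this:)

# hypothetical _redirects rule served for the old domain
https://virtuallypotato.com/* https://runtimeterror.dev/:splat 301!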
Stay buggy.
I had a lot of cool things I wanted to explore/document with vRA8 and I'm sad that I won't get to &quot;complete&quot; this series. I no longer run a vRA8 instance in my homelab (it's a resource hog and I needed the RAM for other things) though so I don't anticipate revisiting this.&#160;&#x21a9;&#xfe0e;
is a pun, and I'm a sucker for puns.&#160;&#x21a9;&#xfe0e;
`,description:"This blog has migrated from virtuallypotato.com to runtimeterror.dev.",url:"https://runtimeterror.dev/virtuallypotato-runtimeterror/"},"https://runtimeterror.dev/how-to-ask-for-help/":{title:"How to Ask For Help",tags:[],content:`I spend a lot of my time and energy answering technical questions, both professionally and &quot;for fun&quot; as a way to scratch that troubleshooting itch. How a question is asked plays a big factor in how effectively I'll be able to answer it.
Years ago I came across Eric Steven Raymond's How To Ask Questions The Smart Way↗ and it really resonated with me. I wish everyone would read it before asking for technical help but I recognize it's a pretty large doc so that's an unrealistic wish. There are a few main points I'd like to emphasize though.
Good questions are a stimulus and a gift.
The sorts of people who choose to spend their time answering questions do it because they enjoy helping others. They thrive on interesting and challenging questions... but they're also busy, so may pass over questions that don't seem as interesting or have been asked (and answered!) countless times before. You're not just asking for an answer to your question - you're also asking for someone else to care enough about your problem to spend their time helping you. Put some time and effort into your question and you'll be more likely to get a helpful response.
Before You Ask... One of the best ways to demonstrate that your question isn't going to waste anyone's time is to show that you've already made an effort to find the solution on your own. Search the web, read the documentation, browse posts on the forum you're about to post on, and maybe try a few different ways of tackling the problem.
If you're still stuck, you can share in your message what you've already tried and why it didn't help. This will help people understand your problem and avoid suggesting solutions you've already tried.
Write a Clear Subject The subject line is your best chance to get someone to look at your question. If you're posting on a help forum, it's understood that you're seeking assistance; you don't need to put &quot;help&quot; anywhere in the subject line. Instead, use that space to clearly describe the problem you're facing, including any specific devices or software involved.
A good subject line is descriptive yet concise.
Not great: &quot;Help! Upgrade failed!&quot;
Much better: &quot;Upgrade to v2.0.1 failed with error code: 0x80004005&quot;
Use Correct Spelling and Grammar We need to understand what you're trying to ask before we can answer it, and that can be hard to do if your question is loaded with typos and mistakes. It doesn't have to be perfect, but please make an effort.
Once you've written your query, take the time to read back over it and ensure it makes sense. This is another chance to show us that you're serious about solving the problem.
Your helpers aren't going to judge you for linguistic errors but such mistakes may make it difficult for them to understand your problem. If you're not comfortable writing in English, go ahead and post the question in your own language using short sentences and correct punctuation. There are plenty of tools that your helper can use to translate your message, and those will be much more effective if the message is written clearly in a language you know well.
Be Precise The more details you can provide about your problem, the better. The necessary details may vary depending on your problem or issue, but some questions to ask yourself might include:
What system/device/application are you using? Be specific about the model and/or version (and please don't just say "the latest version" - share the complete version number).
What are the symptoms of the problem? Are there any error messages? If so, include those in full. Screenshots can be a big help here.
When did the problem start? Is this a new problem or has it never worked? What (if anything) changed before the problem started?
What steps might someone take to reproduce the problem? List the complete steps without any shortcuts.
What steps have you already taken to troubleshoot or solve the problem? Don't say "I've tried everything" (which tells us nothing); tell us exactly what you've tried.
Did your research yield other reports of the same problem? If so, include links to at least a few of those reports.

Share anything you think might be relevant up front. It's easier (and quicker) for us to skip over details we don't need than to have to go back and forth asking for more information.
Describe the Symptoms, Not Your Guesses If you're not sure what's causing the problem, don't guess. Sharing a guess which isn't correct can lead people down the wrong path and waste everyone's time. Focus instead on accurately describing the symptoms and leave the diagnosis to the kind souls trying to help you.
Similarly, don't say &quot;I know it isn't _____&quot; unless you can also tell us precisely why you think that.
After all, if you knew exactly what was wrong then you probably wouldn't be asking for help in the first place.
Describe the Goal, Not the Step When trying to accomplish a particular goal, share that goal rather than just focusing on the step you're stuck on. It's easy to get fixated on a small step (which may or may not be the correct one to take) and miss the better solution. Describe the Big Picture version of what you hope to do and let us help you figure out the best way to get there.
Be Courteous Remember that you're asking for help, not demanding it. In many cases, you'll be posting on a user-to-user support community staffed entirely by volunteers. They are under no obligation to help you and will happily disengage if you're abusive or rude. Be polite and respectful of their time and effort.
Their replies to you may be terse or blunt. It's not because they are offended by your question or insulted that you would ask it, but rather because there are a lot of people asking for their help and they're trying to maximize their time. Assume that their responses are offered in good faith, just as they'll assume the same of yours. Don't take offense, but also don't be afraid to ask for clarification if you don't understand something.
Even if you are seeking help from a paid support team, remember that they are people too. Treat them with politeness and respect; not only is it the right way to interact with other humans, but it might keep them interested in helping you.
Follow Up with Solution Once your problem has been solved (either based on the advice you received or on your own), please be sure to report back to let everyone know. If the solution was reached based on the help you received, politely thank those who assisted. If you fixed it by yourself, share how you did it so that others may benefit from your experience (and, again, thank those who offered their time and advice).
Thank you for helping us help you help us all.
`,description:"There are no dumb questions - but there are smarter (and dumber) ways to ask them.",url:"https://runtimeterror.dev/how-to-ask-for-help/"},"https://runtimeterror.dev/cat-file-without-comments/":{title:"Cat a File Without Comments",tags:["linux","shell","regex"],content:`It's super handy when a Linux config file is loaded with comments to tell you precisely how to configure the thing, but all those comments can really get in the way when you're trying to review the current configuration.
Next time, instead of scrolling through page after page of lengthy embedded explanations, just use:
egrep -v &#34;^\\s*(#|$)&#34; $filename # [tl! .cmd] For added usefulness, I alias this command to ccat (which my brain interprets as &quot;commentless cat&quot;) in my ~/.zshrc↗:
alias ccat=&#39;egrep -v &#34;^\\s*(#|$)&#34;&#39; Now instead of viewing all 75 lines of a mostly-default Vagrantfile, I just see the 7 that matter:
wc -l Vagrantfile # [tl! .cmd] 75 Vagrantfile # [tl! .nocopy] ccat Vagrantfile # [tl! .cmd] Vagrant.configure(&#34;2&#34;) do |config| # [tl! .nocopy:start] config.vm.box = &#34;oopsme/windows11-22h2&#34; config.vm.provider :libvirt do |libvirt| libvirt.cpus = 4 libvirt.memory = 4096 end end # [tl! .nocopy:end] ccat Vagrantfile | wc -l # [tl! .cmd] 7 # [tl! .nocopy] Nice!
`,description:"A quick trick to strip out the comments when viewing the contents of a file.",url:"https://runtimeterror.dev/cat-file-without-comments/"},"https://runtimeterror.dev/create-vms-chromebook-hashicorp-vagrant/":{title:"Create Virtual Machines on a Chromebook with HashiCorp Vagrant",tags:["linux","chromeos","homelab","iac"],content:`I've lately been trying to do more with Salt↗ at work, but I'm still very much a novice with that tool. I thought it would be great to have a nice little portable lab environment where I could deploy a few lightweight VMs and practice managing them with Salt - without impacting any systems that are actually being used for anything. Along the way, I figured I'd leverage HashiCorp Vagrant↗ to create and manage the VMs, which would provide a declarative way to define what the VMs should look like. The VM (or even groups of VMs) would be specified in a single file, and I'd bypass all the tedious steps of creating the virtual hardware, attaching the installation media, installing the OS, and performing the initial configuration. Vagrant will help me build up, destroy, and redeploy a development environment in a simple and repeatable way.
Also, because I'm a bit of a sadist, I wanted to do this all on my new Framework Chromebook↗. I might as well put my 32GB of RAM to good use, right?
It took a bit of fumbling, but this article describes what it took to get a Vagrant-powered VM up and running in the Linux Development Environment↗ on my Chromebook (which is currently running ChromeOS v111 beta).
Install the prerequisites First things first: you should make sure your Chromebook supports nested virtualization. Many newer ones do, but it's not a universal thing. It's easy to check just by looking for the kvm device file:
ls -l /dev/kvm # [tl! .cmd]
crw-rw---- 1 root kvm 10, 232 Feb  6 08:03 /dev/kvm # [tl! .nocopy]

As long as you don't get an error like No such file or directory then you should be good to go.
With that out of the way, there are a few packages which need to be installed before we can move on to the Vagrant-specific stuff. It's quite possible that these are already on your system.... but if they aren't already present you'll have a bad problem1.
sudo apt update &amp;&amp; sudo apt install \\ # [tl! .cmd] build-essential \\ gpg \\ lsb-release \\ wget I'll be configuring Vagrant to use libvirt↗ to interface with the Kernel Virtual Machine (KVM)↗ virtualization solution (rather than something like VirtualBox which would require kernel modules that can't be loaded in ChromeOS) so I'll need to install some packages for that as well:
sudo apt install virt-manager libvirt-dev # [tl! .cmd] And to avoid having to sudo each time I interact with libvirt I'll add myself to that group:
sudo gpasswd -a $USER libvirt ; newgrp libvirt # [tl! .cmd] And to avoid this issue↗ I'll make a tweak to the qemu.conf file:
echo &#34;remember_owner = 0&#34; | sudo tee -a /etc/libvirt/qemu.conf # [tl! .cmd:1] sudo systemctl restart libvirtd Update 2024-01-17
There seems to be an issue with libvirt in LXC containers on Debian Bookworm↗, which explains why I was getting errors on vagrant up after updating my Crostini environment.
The workaround is to add another line to qemu.conf:
echo &#34;namespaces = []&#34; | sudo tee -a /etc/libvirt/qemu.conf # [tl! .cmd:1] sudo systemctl restart libvirtd I'm also going to use rsync to share a synced folder↗ between the host and the VM guest so I'll need to make sure that's installed too:
sudo apt install rsync # [tl! .cmd] Install Vagrant With that out of the way, I'm ready to move on to the business of installing Vagrant. I'll start by adding the HashiCorp repository:
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg # [tl! .cmd:1] echo &#34;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&#34; | sudo tee /etc/apt/sources.list.d/hashicorp.list I'll then install the Vagrant package:
sudo apt update # [tl! .cmd:1] sudo apt install vagrant I also need to install the vagrant-libvirt plugin↗ so that Vagrant will know how to interact with libvirt:
vagrant plugin install vagrant-libvirt # [tl! .cmd] Create a lightweight VM Now I can get to the business of creating my first VM with Vagrant!
Vagrant VMs are distributed as Boxes, and I can browse some published Boxes at app.vagrantup.com/boxes/search?provider=libvirt↗ (applying the provider=libvirt filter so that I only see Boxes which will run on my chosen virtualization provider). For my first VM, I'll go with something light and simple: generic/alpine38↗.
So I'll create a new folder to contain the Vagrant configuration:
mkdir vagrant-alpine # [tl! .cmd:1] cd vagrant-alpine And since I'm referencing a Vagrant Box which is published on Vagrant Cloud, downloading the config is as simple as:
vagrant init generic/alpine38 # [tl! .cmd] # [tl! .nocopy:4] A \`Vagrantfile\` has been placed in this directory. You are now ready to \`vagrant up\` your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on \`vagrantup.com\` for more information on using Vagrant. Before I vagrant up the joint, I do need to make a quick tweak to the default Vagrantfile, which is what tells Vagrant how to configure the VM. By default, Vagrant will try to create a synced folder using NFS and will throw a nasty error when that (inevitably2) fails. So I'll open up the Vagrantfile to review and edit it:
vim Vagrantfile # [tl! .cmd]

Most of the default Vagrantfile is commented out. Here's the entirety of the configuration without the comments:
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"
end

There's not a lot there, is there? Well I'm just going to add these two lines somewhere between the Vagrant.configure() and end lines:
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"
  config.nfs.verify_installed = false # [tl! focus:1 highlight:1]
  config.vm.synced_folder '.', '/vagrant', type: 'rsync'
end

The first line tells Vagrant not to bother checking whether NFS is installed, and the second tells it to use rsync to share the local directory with the VM guest, where it will be mounted at /vagrant.
So here's the full Vagrantfile (sans-comments3, again):
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"
  config.nfs.verify_installed = false
  config.vm.synced_folder '.', '/vagrant', type: 'rsync'
end

With that, I'm ready to fire up this VM with vagrant up! Vagrant will look inside Vagrantfile to see the config, pull down the generic/alpine38 Box from Vagrant Cloud, boot the VM, configure it so I can SSH in to it, and mount the synced folder:
vagrant up # [tl! .cmd] Bringing machine &#39;default&#39; up with &#39;libvirt&#39; provider... # [tl! .nocopy:start] ==&gt; default: Box &#39;generic/alpine38&#39; could not be found. Attempting to find and install... default: Box Provider: libvirt default: Box Version: &gt;= 0 ==&gt; default: Loading metadata for box &#39;generic/alpine38&#39; default: URL: https://vagrantcloud.com/generic/alpine38 ==&gt; default: Adding box &#39;generic/alpine38&#39; (v4.2.12) for provider: libvirt default: Downloading: https://vagrantcloud.com/generic/boxes/alpine38/versions/4.2.12/providers/libvirt.box default: Calculating and comparing box checksum... ==&gt; default: Successfully added box &#39;generic/alpine38&#39; (v4.2.12) for &#39;libvirt&#39;! ==&gt; default: Uploading base box image as volume into Libvirt storage... [...] ==&gt; default: Waiting for domain to get an IP address... ==&gt; default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 192.168.121.41:22 default: SSH username: vagrant default: SSH auth method: private key [...] default: Key inserted! Disconnecting and reconnecting using new SSH key... ==&gt; default: Machine booted and ready! ==&gt; default: Rsyncing folder: /home/john/projects/vagrant-alpine/ =&gt; /vagrant # [tl! .nocopy:end] And then I can use vagrant ssh to log in to the new VM:
vagrant ssh # [tl! .cmd:1] cat /etc/os-release NAME=&#34;Alpine Linux&#34; # [tl! .nocopy:5] ID=alpine VERSION_ID=3.8.5 PRETTY_NAME=&#34;Alpine Linux v3.8&#34; HOME_URL=&#34;http://alpinelinux.org&#34; BUG_REPORT_URL=&#34;http://bugs.alpinelinux.org&#34; I can also verify that the synced folder came through as expected:
ls -l /vagrant # [tl! .cmd] total 4 # [tl! .nocopy:1] -rw-r--r-- 1 vagrant vagrant 3117 Feb 20 15:51 Vagrantfile Once I'm finished poking at this VM, shutting it down is as easy as:
vagrant halt # [tl! .cmd] And if I want to clean up and remove all traces of the VM, that's just:
vagrant destroy # [tl! .cmd] Create a heavy VM, as a treat Having proven to myself that Vagrant does work on a Chromebook, let's see how it does with a slightly-heavier VM.... like Windows 11↗.
Space Requirement
Windows 11 makes for a pretty hefty VM which will require significant storage space. My Chromebook's Linux environment ran out of storage space the first time I attempted to deploy this guy. Fortunately ChromeOS makes it easy to allocate more space to Linux (Settings &gt; Advanced &gt; Developers &gt; Linux development environment &gt; Disk size). You'll probably need at least 30GB free to provision this VM.
Again, I'll create a new folder to hold the Vagrant configuration and do a vagrant init:
mkdir vagrant-win11 # [tl! .cmd:2] cd vagrant-win11 vagrant init oopsme/windows11-22h2 And, again, I'll edit the Vagrantfile before starting the VM. This time, though, I'm adding a few configuration options to tell libvirt that I'd like more compute resources than the default 1 CPU and 512MB RAM4:
Vagrant.configure("2") do |config|
  config.vm.box = "oopsme/windows11-22h2"
  config.vm.provider :libvirt do |libvirt|
    libvirt.cpus = 4 # [tl! highlight:1]
    libvirt.memory = 4096
  end
end

Now it's time to bring it up. This one's going to take A While as it syncs the ~12GB Box first.
vagrant up # [tl! .cmd] Eventually it should spit out that lovely Machine booted and ready! message and I can log in! I can do a vagrant ssh again to gain a shell in the Windows environment, but I'll probably want to interact with those sweet sweet graphics. That takes a little bit more effort.
First, I'll use virsh -c qemu:///system list to see the running VM(s):
virsh -c qemu:///system list # [tl! .cmd] Id Name State # [tl! .nocopy:2] --------------------------------------- 10 vagrant-win11_default running Then I can tell virt-viewer that I'd like to attach a session there:
virt-viewer -c qemu:///system -a vagrant-win11_default # [tl! .cmd] I log in with the default password vagrant, and I'm in Windows 11 land! Next steps Well that about does it for a proof-of-concept. My next steps will be exploring multi-machine Vagrant environments↗ to create a portable lab environment including machines running several different operating systems so that I can learn how to manage them effectively with Salt. It should be fun!
and will not go to space today↗.&#160;&#x21a9;&#xfe0e;
NFS doesn't work properly from within an LXD container, like the ChromeOS Linux development environment.&#160;&#x21a9;&#xfe0e;
Through the magic of egrep -v &quot;^\\s*(#|$)&quot; $file.&#160;&#x21a9;&#xfe0e;
Note here that libvirt.memory is specified in MB. Windows 11 boots happily with 4096 MB of RAM.... and somewhat less so with just 4 MB. Ask me how I know...&#160;&#x21a9;&#xfe0e;
`,description:"Pairing the powerful Linux Development Environment on modern Chromebooks with HashiCorp Vagrant to create and manage local virtual machines for development and testing",url:"https://runtimeterror.dev/create-vms-chromebook-hashicorp-vagrant/"},"https://runtimeterror.dev/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/":{title:"PSA: Microsoft's KB5022842 breaks Windows Server 2022 VMs with Secure Boot",tags:["vmware","powershell","windows","powercli"],content:` Fix available
VMware has released a fix for this problem in the form of ESXi 7.0 Update 3k↗:
If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.
Microsoft released a patch↗ this week for Windows Server 2022 that might cause some big problems↗ in VMware environments. Per VMware's KB90947↗:
After installing Windows Server 2022 update KB5022842 (OS Build 20348.1547), guest OS can not boot up when virtual machine(s) configured with secure boot enabled running on vSphere ESXi 6.7 U2/U3 or vSphere ESXi 7.0.x.
Currently there is no resolution for virtual machines running on vSphere ESXi 6.7 U2/U3 and vSphere ESXi 7.0.x. However the issue doesn't exist with virtual machines running on vSphere ESXi 8.0.x.
So yeah. That's, uh, not great.
If you've got any Windows Server 2022 VMs with Secure Boot↗ enabled on ESXi 6.7/7.x, you'll want to make sure they do not get KB5022842 until this problem is resolved.
I put together a quick PowerCLI query to help identify impacted VMs in my environment:
# torchlight! {"lineNumbers": true}
$secureBoot2022VMs = foreach($datacenter in (Get-Datacenter)) {
  $datacenter | Get-VM |
    Where-Object {$_.Guest.OsFullName -Match 'Microsoft Windows Server 2022' -And $_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled} |
    Select-Object @{N="Datacenter";E={$datacenter.Name}},
      Name,
      @{N="Running OS";E={$_.Guest.OsFullName}},
      @{N="Secure Boot";E={$_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled}},
      PowerState
}
$secureBoot2022VMs | Export-Csv -NoTypeInformation -Path ./secureBoot2022VMs.csv

Be careful out there!
`,description:"Quick warning about a problematic patch from Microsoft, and a PowerCLI script to expose the potential impact in your vSphere environment.",url:"https://runtimeterror.dev/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/"},"https://runtimeterror.dev/tailscale-golink-private-shortlinks-tailnet/":{title:"Tailscale golink: Private Shortlinks for your Tailnet",tags:["docker","vpn","tailscale","wireguard","containers","selfhosting"],content:`I've shared in the past about how I use custom search engines in Chrome as quick web shortcuts. And I may have mentioned my love for Tailscale a time or two as well. Well I recently learned of a way to combine these two passions: Tailscale golink↗. The golink announcement post on the Tailscale blog↗ offers a great overview of the service:
Using golink, you can create and share simple go/name links for commonly accessed websites, so that anyone in your network can access them no matter the device they’re on — without requiring browser extensions or fiddling with DNS settings. And because golink integrates with Tailscale, links are private to users in your tailnet without any separate user management, logins, or security policies.
And these go links don't have to be simply static shortcuts either; they can also conditionally insert text into the target URL. That lets the shortcuts work similarly to my custom search engines in Chrome, but they are available on any device in my tailnet rather than just those that run Chrome. The shortcuts even work from command-line utilities like curl, provided that you pass a flag like -L to follow redirects. Sounds great - but how do you actually make golink available on your tailnet? Well, here's what I did to deploy the golink Docker image↗ on a Photon OS VM I set up running on my Quartz64 running ESXi-ARM.
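As a quick illustration of that curl behavior (using the wx shortlink that's defined in the table later in this post):

# -L tells curl to follow the redirect that golink returns
curl -sL http://go/wx/atlanta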
Tailnet prep There are three things I'll need to do in the Tailscale admin portal before moving on:
Create an ACL tag I assign ACL tags to devices in my tailnet based on their location and/or purpose, and I'm then able to use those in a policy to restrict access between certain devices. To that end, I'm going to create a new tag:golink tag for this purpose. Creating a new tag in Tailscale is really just going to the Access Controls page of the admin console↗ and editing the policy to specify a tagOwner who is permitted to assign the tag:
&#34;groups&#34;: &#34;group:admins&#34;: [&#34;[email protected]&#34;], }, &#34;tagOwners&#34;: { &#34;tag:home&#34;: [&#34;group:admins&#34;], &#34;tag:cloud&#34;: [&#34;group:admins&#34;], &#34;tag:client&#34;: [&#34;group:admins&#34;], &#34;tag:dns&#34;: [&#34;group:admins&#34;], &#34;tag:rsync&#34;: [&#34;group:admins&#34;], &#34;tag:funnel&#34;: [&#34;group:admins&#34;], &#34;tag:golink&#34;: [&#34;group:admins&#34;], // [tl! highlight] }, Configure ACL access This step is really only necessary since I've altered the default Tailscale ACL and prevent my nodes from communicating with each other unless specifically permitted. I want to make sure that everything on my tailnet can access golink:
&#34;acls&#34;: [ { // make golink accessible to everything &#34;action&#34;: &#34;accept&#34;, &#34;users&#34;: [&#34;*&#34;], &#34;ports&#34;: [ &#34;tag:golink:80&#34;, ], }, ... ], Create an auth key The last prerequisite task is to create a new authentication key that the golink container can use to log in to Tailscale since I won't be running tailscale interactively. This can easily be done from the Settings page↗. I'll go ahead and set the key to expire in 1 day (since I'm going to use it in just a moment), make sure that the Ephemeral option is disabled (since I don't want the new node to lose its authorization once it disconnects), and associate it with my new tag:golink tag.
Applying that tag does two things for me: (1) it makes it easy to manage access with the ACL policy file edited above, and (2) it automatically sets it so that the node's token won't automatically expire. Once it's auth'd and connected to my tailnet, it'll stay there.
After clicking the Generate key button, the key will be displayed. This is the only time it will be visible so be sure to copy it somewhere safe!
Docker setup The golink repo↗ offers this command for running the container:
docker run -it --rm ghcr.io/tailscale/golink:main # [tl! .cmd] The doc also indicates that I can pass the auth key to the golink service via the TS_AUTHKEY environment variable, and that all the configuration will be stored in /home/nonroot (which will be owned by uid/gid 65532). I'll take this knowledge and use it to craft a docker-compose.yml to simplify container management.
mkdir -p golink/data # [tl! .cmd:3]
cd golink
sudo chown 65532:65532 data
vi docker-compose.yaml

# torchlight! {"lineNumbers": true}
# golink docker-compose.yaml
version: '3'
services:
  golink:
    container_name: golink
    restart: unless-stopped
    image: ghcr.io/tailscale/golink:main
    volumes:
      - './data:/home/nonroot'
    environment:
      - TS_AUTHKEY=MY_TS_AUTHKEY

I can then start the container with sudo docker-compose up -d, and check the Tailscale admin console to see that the new machine was registered successfully: And I can point a web browser to go/ and see the (currently-empty) landing page:

Security cleanup!
The TS_AUTHKEY is only needed for this initial authentication; now that the container is connected to my Tailnet I can remove that line from the docker-compose.yaml file to avoid having a sensitive credential hanging around. Future (re)starts of the container will use the token stored in the golink database.
Get go'ing Getting started with golink is pretty simple - just enter a shortname and a destination: So now when I enter go/vcenter it will automatically take me to the vCenter in my homelab. That's handy... but we can do better. You see, golink also supports Go template syntax, which allows it to behave a bit like those custom search engines I mentioned earlier.
I can go to go/.detail/LINK_NAME to edit the link, so I hit up go/.detail/vcenter and add a bit to the target URL:
https://vcsa.lab.bowdre.net/ui/{{with .Path}}app/search?query={{.}}&searchType=simple{{end}} Now if I just enter go/vcenter I will go to the vSphere UI, while if I enter something like go/vcenter/vm_name I will instead be taken directly to the corresponding search results.
Some of my other golinks:
Shortlink | Destination URL | Description
code | https://github.com/search?type=code&q=user:jbowdre{{with .Path}}+{{.}}{{end}} | searches my code on Github
ipam | https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}} | searches my lab phpIPAM instance
pdb | https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}} | searches protondb
tailnet | https://login.tailscale.com/admin/machines?q={{.Path}} | searches my Tailscale admin panel for a machine name
sho | https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}} | searches Shodan for interesting internet-connected systems
randpass | https://www.random.org/passwords/?num=1&len=24&format=plain&rnd=new | generates a random 24-character string suitable for use as a password (curl-friendly)
wx | https://wttr.in/{{ .Path }} | local weather report based on geolocation or weather for a designated city (curl-friendly)
Back up and restore You can browse to go/.export to see a JSON-formatted listing of all configured shortcuts - or, if you're clever, you could do something like curl http://go/.export -o links.json to download a copy.
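If you'd rather have that export happen on a schedule instead of whenever you remember to run it, a rough sketch of a nightly cron entry (on a machine that's on the tailnet) might look like the following; the backup path and time are arbitrary placeholders, and note that % signs have to be escaped inside a crontab:
0 3 * * * curl -s http://go/.export -o /opt/golink/backups/links-$(date +\%F).json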
To restore, just pass --snapshot /path/to/links.json when starting golink. What I usually do is copy the file into the data folder that I'm mounting as a Docker volume, and then just run:
sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json # [tl! .cmd] Conclusion This little golink utility has been pretty handy on my Tailnet so far. It seems so simple, but I'm really impressed by how well it works. If you happen to try it out, I'd love to hear how you're putting it to use.
Tailscale on VMware Photon OS
You might remember that I'm a pretty big fan of Tailscale↗, which makes it easy to connect your various devices together in a secure tailnet↗, or private network. Tailscale is super simple to set up on most platforms, but you'll need to install it manually↗ if there isn't a prebuilt package for your system.
Here's a condensed list of the steps that I took to manually install Tailscale on VMware's Photon OS↗, though the same (or similar) steps should also work on just about any other systemd-based system.
Visit https://pkgs.tailscale.com/stable/#static↗ to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz. Download and extract it to the system: wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz # [tl! .cmd:2] tar xvf tailscale_1.34.1_arm64.tgz cd tailscale_1.34.1_arm64/ Install the binaries and service files: sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:4] sudo install -m 755 tailscaled /usr/sbin/ sudo install -m 644 systemd/tailscaled.defaults /etc/default/tailscaled sudo install -m 644 systemd/tailscaled.service /usr/lib/systemd/system/ Start the service: sudo systemctl enable tailscaled # [tl! .cmd:1] sudo systemctl start tailscaled From that point, just sudo tailscale up↗ like normal.
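A couple of optional post-install checks (nothing Photon-specific, just standard commands) can confirm the binaries and service are in place before authenticating:
tailscale version # [tl! .cmd:1]
systemctl status tailscaled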
Updating Tailscale
Since Tailscale was installed outside of any package manager, it won't get updated automatically. When new versions are released you'll need to update it manually. To do that:
Download and extract the new version. Install the tailscale and tailscaled binaries as described above (no need to install the service files again). Restart the service with sudo systemctl restart tailscaled.
K8s on vSphere: Building a Kubernetes Node Template With Packer
I've been leveraging the open-source Tanzu Community Edition Kubernetes distribution for a little while now, both in my home lab and at work, so I was disappointed to learn that VMware was abandoning the project↗. TCE had been a pretty good fit for my needs, and now I needed to search for a replacement. VMware is offering a free version of Tanzu Kubernetes Grid as a replacement, but it comes with a license solely for non-commercial use so I wouldn't be able to use it at work. And I'd really like to use the same solution in both environments to make development and testing easier on me.
There are a bunch of great projects for running Kubernetes in development/lab environments, and others optimized for much larger enterprise environments, but I struggled to find a product that felt like a good fit for both in the way TCE was. My workloads are few and pretty simple so most enterprise K8s variants (Tanzu included) would feel like overkill, but I do need to ensure everything remains highly-available in the data centers at work.
Plus, I thought it would be a fun learning experience to roll my own Kubernetes on vSphere!
In the next couple of posts, I'll share the details of how I'm using Terraform to provision production-ready vanilla Kubernetes clusters on vSphere (complete with the vSphere Container Storage Interface plugin!) in a consistent and repeatable way. I also plan to document one of the ways I'm leveraging these clusters, which is using them as a part of a Gitlab CI/CD pipeline to churn out weekly VM template builds so I never again have to worry about my templates being out of date.
I have definitely learned a ton in the process (and still have a lot more to learn), but today I'll start by describing how I'm leveraging Packer to create a single VM template ready to enter service as a Kubernetes compute node.
What's Packer, and why? HashiCorp Packer↗ is a free open-source tool designed to create consistent, repeatable machine images. It's pretty killer as a part of a CI/CD pipeline to kick off new builds based on a schedule or code commits, but also works great for creating builds on-demand. Packer uses the HashiCorp Configuration Language (HCL)↗ to describe all of the properties of a VM build in a concise and readable format.
You might ask why I would bother with using a powerful tool like Packer if I'm just going to be building a single template. Surely I could just do that by hand, right? And of course, you'd be right - but using an Infrastructure as Code tool even for one-off builds has some pretty big advantages.
It's fast. Packer is able to build a complete VM (including pulling in all available OS and software updates) in just a few minutes, much faster than I could click through an installer on my own. It's consistent. Packer will follow the exact same steps for every build, removing the small variations (and typos!) that would surely show up if I did the builds manually. It's great for testing changes. Since Packer builds are so fast and consistent, it makes it incredibly easy to test changes as I go. I can be confident that the only changes between two builds will be the changes I deliberately introduced. It's self-documenting. The entire VM (and its guest OS) is described completely within the Packer HCL file(s), which I can review to remember which packages were installed, which user account(s) were created, what partition scheme was used, and anything else I might need to know. It supports change tracking. A Packer build is just a set of HCL files so it's easy to sync them with a version control system like Git to track (and revert) changes as needed. Packer is also extremely versatile, and a broad set of external plugins↗ expand its capabilities to support creating machines for basically any environment. For my needs, I'll be utilizing the vsphere-iso↗ builder which uses the vSphere API to remotely build VMs directly on the hypervisors.
Sounds pretty cool, right? I'm not going to go too deep into "how to Packer" in this post, but HashiCorp does provide some pretty good tutorials↗ to help you get started.
Prerequisites Install Packer Before being able to use Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - # [tl! .cmd:2] sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" sudo apt-get update && sudo apt-get install packer You can learn how to install Packer on other systems by following this tutorial from HashiCorp↗.
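Once that's done, a quick version check confirms the binary is on the PATH and recent enough for the version constraint used later in the build (anything at or above 1.8.2 will do; the exact output will vary):
packer version # [tl! .cmd]
Packer v1.8.2 # [tl! .nocopy]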
Configure privileges Packer will need a user account with sufficient privileges in the vSphere environment to be able to create and manage a VM. I'd recommend using an account dedicated to automation tasks, and assigning it the required privileges listed in the vsphere-iso documentation↗.
Gather installation media My Kubernetes node template will use Ubuntu 20.04 LTS as the OS so I'll go ahead and download the server installer ISO↗ and upload it to a vSphere datastore to make it available to Packer.
Template build After the OS is installed and minimally configured, I'll need to add in Kubernetes components like containerd, kubectl, kubelet, and kubeadm, and then apply a few additional tweaks to get it fully ready.
You can see the entirety of my Packer configuration on GitHub↗, but I'll talk through each file as we go along.
File/folder layout After quite a bit of experimentation, I've settled on a preferred way to organize my Packer build files. I've found that this structure makes the builds modular enough that it's easy to reuse components in other builds, but still consolidated enough to be easily manageable. This layout is, of course, largely subjective - it's just what works well for me:
. ├── certs │ ├── ca.cer ├── data │ ├── meta-data │ └── user-data.pkrtpl.hcl ├── scripts │ ├── cleanup-cloud-init.sh │ ├── cleanup-subiquity.sh │ ├── configure-sshd.sh │ ├── disable-multipathd.sh │ ├── disable-release-upgrade-motd.sh │ ├── enable-vmware-customization.sh │ ├── generalize.sh │ ├── install-ca-certs.sh │ ├── install-k8s.sh │ ├── persist-cloud-init-net.sh │ ├── update-packages.sh │ ├── wait-for-cloud-init.sh │ └── zero-disk.sh ├── ubuntu-k8s.auto.pkrvars.hcl ├── ubuntu-k8s.pkr.hcl └── variables.pkr.hcl The certs folder holds the Base64-encoded PEM-formatted certificate of my internal Certificate Authority which will be automatically installed in the provisioned VM's trusted certificate store. The data folder stores files for generating the cloud-init configuration that will automate the OS installation and configuration. The scripts directory holds a collection of scripts used for post-install configuration tasks. Sure, I could just use a single large script, but using a bunch of smaller ones helps keep things modular and easy to reuse elsewhere. variables.pkr.hcl declares all of the variables which will be used in the Packer build, and sets the default values for some of them. ubuntu-k8s.auto.pkrvars.hcl assigns values to those variables. This is where most of the user-facing options will be configured, such as usernames, passwords, and environment settings. ubuntu-k8s.pkr.hcl is where the build process is actually described. Let's quickly run through that build process, and then I'll back up and examine some other components in detail.
ubuntu-k8s.pkr.hcl packer block The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build:
// torchlight! {"lineNumbers": true} // BLOCK: packer // The Packer configuration. packer { required_version = ">= 1.8.2" required_plugins { vsphere = { version = ">= 1.0.8" source = "github.com/hashicorp/vsphere" } sshkey = { version = ">= 1.0.3" source = "github.com/ivoronin/sshkey" } } } As I mentioned above, I'll be using the official vsphere plugin↗ to handle the provisioning on my vSphere environment. I'll also make use of the sshkey plugin↗ to dynamically generate SSH keys for the build process.
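Because those plugins are pinned in required_plugins, a couple of standard Packer commands will fetch them and sanity-check the configuration before the first build; this is just a general sketch, run from the directory containing the .pkr.hcl files:
packer init . # [tl! .cmd:1]
packer validate .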
data block This section would be used for loading information from various data sources, but I'm only using it for the sshkey plugin (as mentioned above).
// torchlight! {"lineNumbers": true} // BLOCK: data // Defines data sources. data "sshkey" "install" { type = "ed25519" name = "packer_key" } This will generate an Ed25519 keypair, and the public key will include the identifier packer_key to make it easier to manage later on. Using this plugin to generate keys means that I don't have to worry about storing a private key somewhere in the build directory.
locals block Locals are a type of Packer variable which aren't explicitly declared in the variables.pkr.hcl file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the highlighted lines):
// torchlight! {&#34;lineNumbers&#34;: true} // BLOCK: locals // Defines local variables. locals { ssh_public_key = data.sshkey.install.public_key ssh_private_key_file = data.sshkey.install.private_key_path build_tool = &#34;HashiCorp Packer \${packer.version}&#34; build_date = formatdate(&#34;YYYY-MM-DD hh:mm ZZZ&#34;, timestamp()) build_description = &#34;Kubernetes Ubuntu 20.04 Node template\\nBuild date: \${local.build_date}\\nBuild tool: \${local.build_tool}&#34; iso_paths = [&#34;[\${var.common_iso_datastore}] \${var.iso_path}/\${var.iso_file}&#34;] iso_checksum = &#34;\${var.iso_checksum_type}:\${var.iso_checksum_value}&#34; data_source_content = { &#34;/meta-data&#34; = file(&#34;data/meta-data&#34;) &#34;/user-data&#34; = templatefile(&#34;data/user-data.pkrtpl.hcl&#34;, { build_username = var.build_username build_password = bcrypt(var.build_password) ssh_keys = concat([local.ssh_public_key], var.ssh_keys) vm_guest_os_language = var.vm_guest_os_language vm_guest_os_keyboard = var.vm_guest_os_keyboard vm_guest_os_timezone = var.vm_guest_os_timezone vm_guest_os_hostname = var.vm_name apt_mirror = var.cloud_init_apt_mirror apt_packages = var.cloud_init_apt_packages }) } } This block also makes use of the built-in templatefile() function↗ to insert build-specific variables into the user-data file for cloud-init↗ (more on that in a bit).
source block The source block tells the vsphere-iso builder how to connect to vSphere, what hardware specs to set on the VM, and what to do with the VM once the build has finished (convert it to template, export it to OVF, and so on).
You'll notice that most of this is just mapping user-defined variables (with the var. prefix) to properties used by vsphere-iso:
// torchlight! {&#34;lineNumbers&#34;: true} // BLOCK: source // Defines the builder configuration blocks. source &#34;vsphere-iso&#34; &#34;ubuntu-k8s&#34; { // vCenter Server Endpoint Settings and Credentials vcenter_server = var.vsphere_endpoint username = var.vsphere_username password = var.vsphere_password insecure_connection = var.vsphere_insecure_connection // vSphere Settings datacenter = var.vsphere_datacenter cluster = var.vsphere_cluster datastore = var.vsphere_datastore folder = var.vsphere_folder // Virtual Machine Settings vm_name = var.vm_name vm_version = var.common_vm_version guest_os_type = var.vm_guest_os_type firmware = var.vm_firmware CPUs = var.vm_cpu_count cpu_cores = var.vm_cpu_cores CPU_hot_plug = var.vm_cpu_hot_add RAM = var.vm_mem_size RAM_hot_plug = var.vm_mem_hot_add cdrom_type = var.vm_cdrom_type remove_cdrom = var.common_remove_cdrom disk_controller_type = var.vm_disk_controller_type storage { disk_size = var.vm_disk_size disk_thin_provisioned = var.vm_disk_thin_provisioned } network_adapters { network = var.vsphere_network network_card = var.vm_network_card } tools_upgrade_policy = var.common_tools_upgrade_policy notes = local.build_description configuration_parameters = { &#34;devices.hotplug&#34; = &#34;FALSE&#34; } // Removable Media Settings iso_url = var.iso_url iso_paths = local.iso_paths iso_checksum = local.iso_checksum cd_content = local.data_source_content cd_label = var.cd_label // Boot and Provisioning Settings boot_order = var.vm_boot_order boot_wait = var.vm_boot_wait boot_command = var.vm_boot_command ip_wait_timeout = var.common_ip_wait_timeout shutdown_command = var.vm_shutdown_command shutdown_timeout = var.common_shutdown_timeout // Communicator Settings and Credentials communicator = &#34;ssh&#34; ssh_username = var.build_username ssh_private_key_file = local.ssh_private_key_file ssh_clear_authorized_keys = var.build_remove_keys ssh_port = var.communicator_port ssh_timeout = var.communicator_timeout // Snapshot Settings create_snapshot = var.common_snapshot_creation snapshot_name = var.common_snapshot_name // Template and Content Library Settings convert_to_template = var.common_template_conversion dynamic &#34;content_library_destination&#34; { for_each = var.common_content_library_name != null ? [1] : [] content { library = var.common_content_library_name description = local.build_description ovf = var.common_content_library_ovf destroy = var.common_content_library_destroy skip_import = var.common_content_library_skip_export } } // OVF Export Settings dynamic &#34;export&#34; { for_each = var.common_ovf_export_enabled == true ? [1] : [] content { name = var.vm_name force = var.common_ovf_export_overwrite options = [ &#34;extraconfig&#34; ] output_directory = &#34;\${var.common_ovf_export_path}/\${var.vm_name}&#34; } } } build block This block brings everything together and executes the build. It calls the source.vsphere-iso.ubuntu-k8s block defined above, and also ties in a file and a few shell provisioners. file provisioners are used to copy files (like SSL CA certificates) into the VM, while the shell provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages.
// torchlight! {&#34;lineNumbers&#34;: true} // BLOCK: build // Defines the builders to run, provisioners, and post-processors. build { sources = [ &#34;source.vsphere-iso.ubuntu-k8s&#34; ] provisioner &#34;file&#34; { source = &#34;certs&#34; destination = &#34;/tmp&#34; } provisioner &#34;shell&#34; { execute_command = &#34;export KUBEVERSION=\${var.k8s_version}; bash {{ .Path }}&#34; expect_disconnect = true environment_vars = [ &#34;KUBEVERSION=\${var.k8s_version}&#34; ] scripts = var.post_install_scripts } provisioner &#34;shell&#34; { execute_command = &#34;bash {{ .Path }}&#34; expect_disconnect = true scripts = var.pre_final_scripts } } So you can see that the ubuntu-k8s.pkr.hcl file primarily focuses on the structure and form of the build, and it's written in such a way that it can be fairly easily adapted for building other types of VMs. Very few things in this file would have to be changed since so many of the properties are derived from the variables.
You can view the full file here↗.
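As an aside, if you ever want a quick inventory of what a config like this defines (variables, sources, provisioners) without reading through every file, Packer can summarize it for you; this assumes a Packer release new enough to inspect HCL2 configurations:
packer inspect . # [tl! .cmd]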
variables.pkr.hcl Before looking at the build-specific variable definitions, let's take a quick look at the variables I've told Packer that I intend to use. After all, Packer requires that variables be declared before they can be used.
Most of these carry descriptions with them so I won't restate them outside of the code block here:
// torchlight! {&#34;lineNumbers&#34;: true} /* DESCRIPTION: Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso). */ // BLOCK: variable // Defines the input variables. // vSphere Credentials variable &#34;vsphere_endpoint&#34; { type = string description = &#34;The fully qualified domain name or IP address of the vCenter Server instance. (&#39;vcenter.lab.local&#39;)&#34; } variable &#34;vsphere_username&#34; { type = string description = &#34;The username to login to the vCenter Server instance. (&#39;packer&#39;)&#34; sensitive = true } variable &#34;vsphere_password&#34; { type = string description = &#34;The password for the login to the vCenter Server instance.&#34; sensitive = true } variable &#34;vsphere_insecure_connection&#34; { type = bool description = &#34;Do not validate vCenter Server TLS certificate.&#34; default = true } // vSphere Settings variable &#34;vsphere_datacenter&#34; { type = string description = &#34;The name of the target vSphere datacenter. (&#39;Lab Datacenter&#39;)&#34; } variable &#34;vsphere_cluster&#34; { type = string description = &#34;The name of the target vSphere cluster. (&#39;cluster-01&#39;)&#34; } variable &#34;vsphere_datastore&#34; { type = string description = &#34;The name of the target vSphere datastore. (&#39;datastore-01&#39;)&#34; } variable &#34;vsphere_network&#34; { type = string description = &#34;The name of the target vSphere network. (&#39;network-192.168.1.0&#39;)&#34; } variable &#34;vsphere_folder&#34; { type = string description = &#34;The name of the target vSphere folder. (&#39;_Templates&#39;)&#34; } // Virtual Machine Settings variable &#34;vm_name&#34; { type = string description = &#34;Name of the new VM to create.&#34; } variable &#34;vm_guest_os_language&#34; { type = string description = &#34;The guest operating system lanugage.&#34; default = &#34;en_US&#34; } variable &#34;vm_guest_os_keyboard&#34; { type = string description = &#34;The guest operating system keyboard input.&#34; default = &#34;us&#34; } variable &#34;vm_guest_os_timezone&#34; { type = string description = &#34;The guest operating system timezone.&#34; default = &#34;UTC&#34; } variable &#34;vm_guest_os_type&#34; { type = string description = &#34;The guest operating system type. (&#39;ubuntu64Guest&#39;)&#34; } variable &#34;vm_firmware&#34; { type = string description = &#34;The virtual machine firmware. (&#39;efi-secure&#39;. &#39;efi&#39;, or &#39;bios&#39;)&#34; default = &#34;efi-secure&#34; } variable &#34;vm_cdrom_type&#34; { type = string description = &#34;The virtual machine CD-ROM type. (&#39;sata&#39;, or &#39;ide&#39;)&#34; default = &#34;sata&#34; } variable &#34;vm_cpu_count&#34; { type = number description = &#34;The number of virtual CPUs. (&#39;2&#39;)&#34; } variable &#34;vm_cpu_cores&#34; { type = number description = &#34;The number of virtual CPUs cores per socket. (&#39;1&#39;)&#34; } variable &#34;vm_cpu_hot_add&#34; { type = bool description = &#34;Enable hot add CPU.&#34; default = true } variable &#34;vm_mem_size&#34; { type = number description = &#34;The size for the virtual memory in MB. (&#39;2048&#39;)&#34; } variable &#34;vm_mem_hot_add&#34; { type = bool description = &#34;Enable hot add memory.&#34; default = true } variable &#34;vm_disk_size&#34; { type = number description = &#34;The size for the virtual disk in MB. 
(&#39;61440&#39; = 60GB)&#34; default = 61440 } variable &#34;vm_disk_controller_type&#34; { type = list(string) description = &#34;The virtual disk controller types in sequence. (&#39;pvscsi&#39;)&#34; default = [&#34;pvscsi&#34;] } variable &#34;vm_disk_thin_provisioned&#34; { type = bool description = &#34;Thin provision the virtual disk.&#34; default = true } variable &#34;vm_disk_eagerly_scrub&#34; { type = bool description = &#34;Enable VMDK eager scrubbing for VM.&#34; default = false } variable &#34;vm_network_card&#34; { type = string description = &#34;The virtual network card type. (&#39;vmxnet3&#39; or &#39;e1000e&#39;)&#34; default = &#34;vmxnet3&#34; } variable &#34;common_vm_version&#34; { type = number description = &#34;The vSphere virtual hardware version. (e.g. &#39;19&#39;)&#34; } variable &#34;common_tools_upgrade_policy&#34; { type = bool description = &#34;Upgrade VMware Tools on reboot.&#34; default = true } variable &#34;common_remove_cdrom&#34; { type = bool description = &#34;Remove the virtual CD-ROM(s).&#34; default = true } // Template and Content Library Settings variable &#34;common_template_conversion&#34; { type = bool description = &#34;Convert the virtual machine to template. Must be &#39;false&#39; for content library.&#34; default = false } variable &#34;common_content_library_name&#34; { type = string description = &#34;The name of the target vSphere content library, if used. (&#39;Lab-CL&#39;)&#34; default = null } variable &#34;common_content_library_ovf&#34; { type = bool description = &#34;Export to content library as an OVF template.&#34; default = false } variable &#34;common_content_library_destroy&#34; { type = bool description = &#34;Delete the virtual machine after exporting to the content library.&#34; default = true } variable &#34;common_content_library_skip_export&#34; { type = bool description = &#34;Skip exporting the virtual machine to the content library. Option allows for testing / debugging without saving the machine image.&#34; default = false } // Snapshot Settings variable &#34;common_snapshot_creation&#34; { type = bool description = &#34;Create a snapshot for Linked Clones.&#34; default = false } variable &#34;common_snapshot_name&#34; { type = string description = &#34;Name of the snapshot to be created if create_snapshot is true.&#34; default = &#34;Created By Packer&#34; } // OVF Export Settings variable &#34;common_ovf_export_enabled&#34; { type = bool description = &#34;Enable OVF artifact export.&#34; default = false } variable &#34;common_ovf_export_overwrite&#34; { type = bool description = &#34;Overwrite existing OVF artifact.&#34; default = true } variable &#34;common_ovf_export_path&#34; { type = string description = &#34;Folder path for the OVF export.&#34; } // Removable Media Settings variable &#34;common_iso_datastore&#34; { type = string description = &#34;The name of the source vSphere datastore for ISO images. (&#39;datastore-iso-01&#39;)&#34; } variable &#34;iso_url&#34; { type = string description = &#34;The URL source of the ISO image. (&#39;https://releases.ubuntu.com/20.04.5/ubuntu-20.04.5-live-server-amd64.iso&#39;)&#34; } variable &#34;iso_path&#34; { type = string description = &#34;The path on the source vSphere datastore for ISO image. (&#39;ISOs/Linux&#39;)&#34; } variable &#34;iso_file&#34; { type = string description = &#34;The file name of the ISO image used by the vendor. 
(&#39;ubuntu-20.04.5-live-server-amd64.iso&#39;)&#34; } variable &#34;iso_checksum_type&#34; { type = string description = &#34;The checksum algorithm used by the vendor. (&#39;sha256&#39;)&#34; } variable &#34;iso_checksum_value&#34; { type = string description = &#34;The checksum value provided by the vendor.&#34; } variable &#34;cd_label&#34; { type = string description = &#34;CD Label&#34; default = &#34;cidata&#34; } // Boot Settings variable &#34;vm_boot_order&#34; { type = string description = &#34;The boot order for virtual machines devices. (&#39;disk,cdrom&#39;)&#34; default = &#34;disk,cdrom&#34; } variable &#34;vm_boot_wait&#34; { type = string description = &#34;The time to wait before boot.&#34; } variable &#34;vm_boot_command&#34; { type = list(string) description = &#34;The virtual machine boot command.&#34; default = [] } variable &#34;vm_shutdown_command&#34; { type = string description = &#34;Command(s) for guest operating system shutdown.&#34; default = null } variable &#34;common_ip_wait_timeout&#34; { type = string description = &#34;Time to wait for guest operating system IP address response.&#34; } variable &#34;common_shutdown_timeout&#34; { type = string description = &#34;Time to wait for guest operating system shutdown.&#34; } // Communicator Settings and Credentials variable &#34;build_username&#34; { type = string description = &#34;The username to login to the guest operating system. (&#39;admin&#39;)&#34; } variable &#34;build_password&#34; { type = string description = &#34;The password to login to the guest operating system.&#34; sensitive = true } variable &#34;build_password_encrypted&#34; { type = string description = &#34;The encrypted password to login the guest operating system.&#34; sensitive = true default = null } variable &#34;ssh_keys&#34; { type = list(string) description = &#34;List of public keys to be added to ~/.ssh/authorized_keys.&#34; sensitive = true default = [] } variable &#34;build_remove_keys&#34; { type = bool description = &#34;If true, Packer will attempt to remove its temporary key from ~/.ssh/authorized_keys and /root/.ssh/authorized_keys&#34; default = true } // Communicator Settings variable &#34;communicator_port&#34; { type = string description = &#34;The port for the communicator protocol.&#34; } variable &#34;communicator_timeout&#34; { type = string description = &#34;The timeout for the communicator protocol.&#34; } variable &#34;communicator_insecure&#34; { type = bool description = &#34;If true, do not check server certificate chain and host name&#34; default = true } variable &#34;communicator_ssl&#34; { type = bool description = &#34;If true, use SSL&#34; default = true } // Provisioner Settings variable &#34;cloud_init_apt_packages&#34; { type = list(string) description = &#34;A list of apt packages to install during the subiquity cloud-init installer.&#34; default = [] } variable &#34;cloud_init_apt_mirror&#34; { type = string description = &#34;Sets the default apt mirror during the subiquity cloud-init installer.&#34; default = &#34;&#34; } variable &#34;post_install_scripts&#34; { type = list(string) description = &#34;A list of scripts and their relative paths to transfer and run after OS install.&#34; default = [] } variable &#34;pre_final_scripts&#34; { type = list(string) description = &#34;A list of scripts and their relative paths to transfer and run before finalization.&#34; default = [] } // Kubernetes Settings variable &#34;k8s_version&#34; { type = string description = &#34;Kubernetes version to be 
installed. Latest stable is listed at https://dl.k8s.io/release/stable.txt&#34; default = &#34;1.25.3&#34; } The full variables.pkr.hcl can be viewed here↗.
ubuntu-k8s.auto.pkrvars.hcl Packer automatically knows to load variables defined in files ending in *.auto.pkrvars.hcl. Storing the variable values separately from the declarations in variables.pkr.hcl makes it easier to protect sensitive values.
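On a related note, values don't have to live in a *.auto.pkrvars.hcl file at all: Packer also reads environment variables named PKR_VAR_<variable_name>, which is a handy way to keep secrets like the vSphere password out of files entirely. A minimal sketch (the password is obviously just a placeholder):
export PKR_VAR_vsphere_password='SuperSecretPassword' # [tl! .cmd:1]
packer build .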
So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
// torchlight! {&#34;lineNumbers&#34;: true} /* DESCRIPTION: Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso). */ // vSphere Credentials vsphere_endpoint = &#34;vcsa.lab.bowdre.net&#34; vsphere_username = &#34;packer&#34; vsphere_password = &#34;VMware1!&#34; vsphere_insecure_connection = true // vSphere Settings vsphere_datacenter = &#34;NUC Site&#34; vsphere_cluster = &#34;nuc-cluster&#34; vsphere_datastore = &#34;nuchost-local&#34; vsphere_network = &#34;MGT-Home 192.168.1.0&#34; vsphere_folder = &#34;_Templates&#34; I'll then describe the properties of the VM itself:
// torchlight! {&#34;lineNumbers&#34;: true} // Guest Operating System Settings vm_guest_os_language = &#34;en_US&#34; vm_guest_os_keyboard = &#34;us&#34; vm_guest_os_timezone = &#34;America/Chicago&#34; vm_guest_os_type = &#34;ubuntu64Guest&#34; // Virtual Machine Hardware Settings vm_name = &#34;k8s-u2004&#34; vm_firmware = &#34;efi-secure&#34; vm_cdrom_type = &#34;sata&#34; vm_cpu_count = 2 vm_cpu_cores = 1 vm_cpu_hot_add = true vm_mem_size = 2048 vm_mem_hot_add = true vm_disk_size = 30720 vm_disk_controller_type = [&#34;pvscsi&#34;] vm_disk_thin_provisioned = true vm_network_card = &#34;vmxnet3&#34; common_vm_version = 19 common_tools_upgrade_policy = true common_remove_cdrom = true Then I'll configure Packer to convert the VM to a template once the build is finished:
// torchlight! {&#34;lineNumbers&#34;: true} // Template and Content Library Settings common_template_conversion = true common_content_library_name = null common_content_library_ovf = false common_content_library_destroy = true common_content_library_skip_export = true // OVF Export Settings common_ovf_export_enabled = false common_ovf_export_overwrite = true common_ovf_export_path = &#34;&#34; Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity:
// torchlight! {"lineNumbers": true} // Removable Media Settings common_iso_datastore = "nuchost-local" iso_url = null iso_path = "_ISO" iso_file = "ubuntu-20.04.5-live-server-amd64.iso" iso_checksum_type = "sha256" iso_checksum_value = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74aa75c3468d4" And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the cloud-init configuration into the Ubuntu installer:
// torchlight! {&#34;lineNumbers&#34;: true} // Boot Settings vm_boot_order = &#34;disk,cdrom&#34; vm_boot_wait = &#34;4s&#34; vm_boot_command = [ &#34;&lt;esc&gt;&lt;wait&gt;&#34;, &#34;linux /casper/vmlinuz --- autoinstall ds=\\&#34;nocloud\\&#34;&#34;, &#34;&lt;enter&gt;&lt;wait&gt;&#34;, &#34;initrd /casper/initrd&#34;, &#34;&lt;enter&gt;&lt;wait&gt;&#34;, &#34;boot&#34;, &#34;&lt;enter&gt;&#34; ] Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the cloud-init configuration in just a minute...)
// torchlight! {&#34;lineNumbers&#34;: true} // Communicator Settings communicator_port = 22 communicator_timeout = &#34;20m&#34; common_ip_wait_timeout = &#34;20m&#34; common_shutdown_timeout = &#34;15m&#34; vm_shutdown_command = &#34;sudo /usr/sbin/shutdown -P now&#34; build_remove_keys = false build_username = &#34;admin&#34; build_password = &#34;VMware1!&#34; ssh_keys = [ &#34;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpLvpxilPjpCahAQxs4RQgv+Lb5xObULXtwEoimEBpA builder&#34; ] Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The post_install_scripts will be run immediately after the operating system installation. The update-packages.sh script will cause a reboot, and then the set of pre_final_scripts will do some cleanup and prepare the VM to be converted to a template.
The last bit of this file also designates the desired version of Kubernetes to be installed.
// torchlight! {"lineNumbers": true} // Provisioner Settings post_install_scripts = [ "scripts/wait-for-cloud-init.sh", "scripts/cleanup-subiquity.sh", "scripts/install-ca-certs.sh", "scripts/disable-multipathd.sh", "scripts/disable-release-upgrade-motd.sh", "scripts/persist-cloud-init-net.sh", "scripts/configure-sshd.sh", "scripts/install-k8s.sh", "scripts/update-packages.sh" ] pre_final_scripts = [ "scripts/cleanup-cloud-init.sh", "scripts/enable-vmware-customization.sh", "scripts/zero-disk.sh", "scripts/generalize.sh" ] // Kubernetes Settings k8s_version = "1.25.3" You can find a full example of this file here↗.
user-data.pkrtpl.hcl Okay, so we've covered the Packer framework that creates the VM; now let's take a quick look at the cloud-init configuration that will allow the OS installation to proceed unattended.
See the bits that look ${ like_this }? Those place-holders will take input from the locals block of ubuntu-k8s.pkr.hcl mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.
# torchlight! {&#34;lineNumbers&#34;: true} #cloud-config autoinstall: version: 1 early-commands: - sudo systemctl stop ssh locale: \${ vm_guest_os_language } keyboard: layout: \${ vm_guest_os_keyboard } network: network: version: 2 ethernets: mainif: match: name: e* critical: true dhcp4: true dhcp-identifier: mac ssh: install-server: true allow-pw: true %{ if length( apt_mirror ) &gt; 0 ~} apt: primary: - arches: [default] uri: &#34;\${ apt_mirror }&#34; %{ endif ~} %{ if length( apt_packages ) &gt; 0 ~} packages: %{ for package in apt_packages ~} - \${ package } %{ endfor ~} %{ endif ~} storage: config: # [tl! collapse:start] - ptable: gpt path: /dev/sda wipe: superblock type: disk id: disk-sda - device: disk-sda size: 1024M wipe: superblock flag: boot number: 1 grub_device: true type: partition id: partition-0 - fstype: fat32 volume: partition-0 label: EFIFS type: format id: format-efi - device: disk-sda size: 1024M wipe: superblock number: 2 type: partition id: partition-1 - fstype: xfs volume: partition-1 label: BOOTFS type: format id: format-boot - device: disk-sda size: -1 wipe: superblock number: 3 type: partition id: partition-2 - name: sysvg devices: - partition-2 type: lvm_volgroup id: lvm_volgroup-0 - name: home volgroup: lvm_volgroup-0 size: 4096M wipe: superblock type: lvm_partition id: lvm_partition-home - fstype: xfs volume: lvm_partition-home type: format label: HOMEFS id: format-home - name: tmp volgroup: lvm_volgroup-0 size: 3072M wipe: superblock type: lvm_partition id: lvm_partition-tmp - fstype: xfs volume: lvm_partition-tmp type: format label: TMPFS id: format-tmp - name: var volgroup: lvm_volgroup-0 size: 4096M wipe: superblock type: lvm_partition id: lvm_partition-var - fstype: xfs volume: lvm_partition-var type: format label: VARFS id: format-var - name: log volgroup: lvm_volgroup-0 size: 4096M wipe: superblock type: lvm_partition id: lvm_partition-log - fstype: xfs volume: lvm_partition-log type: format label: LOGFS id: format-log - name: audit volgroup: lvm_volgroup-0 size: 4096M wipe: superblock type: lvm_partition id: lvm_partition-audit - fstype: xfs volume: lvm_partition-audit type: format label: AUDITFS id: format-audit - name: root volgroup: lvm_volgroup-0 size: -1 wipe: superblock type: lvm_partition id: lvm_partition-root - fstype: xfs volume: lvm_partition-root type: format label: ROOTFS id: format-root - path: / device: format-root type: mount id: mount-root - path: /boot device: format-boot type: mount id: mount-boot - path: /boot/efi device: format-efi type: mount id: mount-efi - path: /home device: format-home type: mount id: mount-home - path: /tmp device: format-tmp type: mount id: mount-tmp - path: /var device: format-var type: mount id: mount-var - path: /var/log device: format-log type: mount id: mount-log - path: /var/log/audit device: format-audit type: mount id: mount-audit # [tl! collapse:end] user-data: package_upgrade: true disable_root: true timezone: \${ vm_guest_os_timezone } hostname: \${ vm_guest_os_hostname } users: - name: \${ build_username } passwd: &#34;\${ build_password }&#34; groups: [adm, cdrom, dip, plugdev, lxd, sudo] lock-passwd: false sudo: ALL=(ALL) NOPASSWD:ALL shell: /bin/bash %{ if length( ssh_keys ) &gt; 0 ~} ssh_authorized_keys: %{ for ssh_key in ssh_keys ~} - \${ ssh_key } %{ endfor ~} %{ endif ~} View the full file here↗. (The meta-data file is empty↗, by the way.)
post_install_scripts After the OS install is completed, the shell provisioner will connect to the VM through SSH and run through some tasks. Remember how I keep talking about this build being modular? That goes down to the scripts, too, so I can use individual pieces in other builds without needing to do a lot of tweaking.
You can find all of the scripts here↗.
wait-for-cloud-init.sh This simply holds up the process until the /var/lib/cloud/instance/boot-finished file has been created, signifying the completion of the cloud-init process:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Waiting for cloud-init...&#39; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1 done cleanup-subiquity.sh Next I clean up any network configs that may have been created during the install process:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg echo &#39;Deleting subiquity cloud-init config&#39; fi if [ -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg ]; then sudo rm /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg echo &#39;Deleting subiquity cloud-init network config&#39; fi install-ca-certs.sh The file provisioner mentioned above helpfully copied my custom CA certs to the /tmp/certs/ folder on the VM; this script will install them into the certificate store:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Installing custom certificates...&#39; sudo cp /tmp/certs/* /usr/local/share/ca-certificates/ cd /usr/local/share/ca-certificates/ for file in *.cer; do sudo mv -- &#34;$file&#34; &#34;\${file%.cer}.crt&#34; done sudo /usr/sbin/update-ca-certificates disable-multipathd.sh This disables multipathd:
# torchlight! {"lineNumbers": true} #!/bin/bash -eu sudo systemctl disable multipathd echo 'Disabling multipathd' disable-release-upgrade-motd.sh And this one disables the release upgrade notices that would otherwise be displayed upon each login:
# torchlight! {"lineNumbers": true} #!/bin/bash -eu echo '>> Disabling release update MOTD...' sudo chmod -x /etc/update-motd.d/91-release-upgrade persist-cloud-init-net.sh I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I'll set a quick cloud-init option to help make sure that happens:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/sh -eu echo &#39;&gt;&gt; Preserving network settings...&#39; echo &#39;manual_cache_clean: True&#39; | sudo tee -a /etc/cloud/cloud.cfg configure-sshd.sh Then I just set a few options for the sshd configuration, like disabling root login:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Configuring SSH&#39; sudo sed -i &#39;s/.*PermitRootLogin.*/PermitRootLogin no/&#39; /etc/ssh/sshd_config sudo sed -i &#39;s/.*PubkeyAuthentication.*/PubkeyAuthentication yes/&#39; /etc/ssh/sshd_config sudo sed -i &#39;s/.*PasswordAuthentication.*/PasswordAuthentication yes/&#39; /etc/ssh/sshd_config install-k8s.sh This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM.
First I enable the required overlay and br_netfilter modules:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#34;&gt;&gt; Installing Kubernetes components...&#34; # Configure and enable kernel modules echo &#34;.. configure kernel modules&#34; cat &lt;&lt; EOF | sudo tee /etc/modules-load.d/containerd.conf overlay br_netfilter EOF sudo modprobe overlay sudo modprobe br_netfilter Then I'll make some networking tweaks to enable forwarding and bridging:
# torchlight! {&#34;lineNumbers&#34;: true} # Configure networking echo &#34;.. configure networking&#34; cat &lt;&lt; EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-ip6tables = 1 EOF sudo sysctl --system Next, set up containerd as the container runtime:
# torchlight! {&#34;lineNumbers&#34;: true} # Setup containerd echo &#34;.. setup containerd&#34; sudo apt-get update &amp;&amp; sudo apt-get install -y containerd apt-transport-https jq sudo mkdir -p /etc/containerd sudo containerd config default | sudo tee /etc/containerd/config.toml sudo systemctl restart containerd Then disable swap:
# torchlight! {&#34;lineNumbers&#34;: true} # Disable swap echo &#34;.. disable swap&#34; sudo sed -i &#39;/[[:space:]]swap[[:space:]]/ s/^\\(.*\\)$/#\\1/g&#39; /etc/fstab sudo swapoff -a Next I'll install the Kubernetes components and (crucially) apt-mark hold them so they won't be automatically upgraded without it being a coordinated change:
# torchlight! {&#34;lineNumbers&#34;: true} # Install Kubernetes echo &#34;.. install kubernetes version \${KUBEVERSION}&#34; sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg echo &#34;deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main&#34; | sudo tee /etc/apt/sources.list.d/kubernetes.list sudo apt-get update &amp;&amp; sudo apt-get install -y kubelet=&#34;\${KUBEVERSION}&#34;-00 kubeadm=&#34;\${KUBEVERSION}&#34;-00 kubectl=&#34;\${KUBEVERSION}&#34;-00 sudo apt-mark hold kubelet kubeadm kubectl update-packages.sh Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Checking for and installing updates...&#39; sudo apt-get update &amp;&amp; sudo apt-get -y upgrade echo &#39;&gt;&gt; Rebooting!&#39; sudo reboot pre_final_scripts After the reboot, all that's left are some cleanup tasks to get the VM ready to be converted to a template and subsequently cloned and customized.
cleanup-cloud-init.sh I'll start with cleaning up the cloud-init state:
# torchlight! {"lineNumbers": true} #!/bin/bash -eu echo '>> Cleaning up cloud-init state...' sudo cloud-init clean -l enable-vmware-customization.sh And then I'll (re)enable the ability for VMware to customize the guest successfully:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Enabling legacy VMware Guest Customization...&#39; echo &#39;disable_vmware_customization: true&#39; | sudo tee -a /etc/cloud/cloud.cfg sudo vmware-toolbox-cmd config set deployPkg enable-custom-scripts true zero-disk.sh I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file:
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu echo &#39;&gt;&gt; Zeroing free space to reduce disk size&#39; sudo sh -c &#39;dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync&#39; sudo sh -c &#39;rm -f /EMPTY; sync; sleep 1; sync&#39; generalize.sh Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the packer_key identifier since that won't be needed anymore.
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash -eu # Prepare a VM to become a template. echo &#39;&gt;&gt; Clearing audit logs...&#39; sudo sh -c &#39;if [ -f /var/log/audit/audit.log ]; then cat /dev/null &gt; /var/log/audit/audit.log fi&#39; sudo sh -c &#39;if [ -f /var/log/wtmp ]; then cat /dev/null &gt; /var/log/wtmp fi&#39; sudo sh -c &#39;if [ -f /var/log/lastlog ]; then cat /dev/null &gt; /var/log/lastlog fi&#39; sudo sh -c &#39;if [ -f /etc/logrotate.conf ]; then logrotate -f /etc/logrotate.conf 2&gt;/dev/null fi&#39; sudo rm -rf /var/log/journal/* sudo rm -f /var/lib/dhcp/* sudo find /var/log -type f -delete echo &#39;&gt;&gt; Clearing persistent udev rules...&#39; sudo sh -c &#39;if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then rm /etc/udev/rules.d/70-persistent-net.rules fi&#39; echo &#39;&gt;&gt; Clearing temp dirs...&#39; sudo rm -rf /tmp/* sudo rm -rf /var/tmp/* echo &#39;&gt;&gt; Clearing host keys...&#39; sudo rm -f /etc/ssh/ssh_host_* echo &#39;&gt;&gt; Removing Packer SSH key...&#39; sed -i &#39;/packer_key/d&#39; ~/.ssh/authorized_keys echo &#39;&gt;&gt; Clearing machine-id...&#39; sudo truncate -s 0 /etc/machine-id if [ -f /var/lib/dbus/machine-id ]; then sudo rm -f /var/lib/dbus/machine-id sudo ln -s /etc/machine-id /var/lib/dbus/machine-id fi echo &#39;&gt;&gt; Clearing shell history...&#39; unset HISTFILE history -cw echo &gt; ~/.bash_history sudo rm -f /root/.bash_history Kick out the jams (or at least the build) Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the .pkr.hcl files, and then run the Packer build command:
packer build -on-error=abort -force . # [tl! .cmd] Flags
The -on-error=abort option makes sure that the build will abort if any steps in the build fail, and -force tells Packer to delete any existing VMs/templates with the same name as the one I'm attempting to build.
And off we go! Packer will output details as it goes which makes it easy to troubleshoot if anything goes wrong. In this case, though, everything works just fine, and I'm met with a happy &quot;success&quot; message! And I can pop over to vSphere to confirm that everything looks right: Next steps My brand new k8s-u2004 template is ready for use! In the next post, I'll walk through the process of manually cloning this template to create my Kubernetes nodes, initializing the cluster, and installing the vSphere integrations. After that process is sorted out nicely, we'll take a look at how to use Terraform to do it all automagically. Stay tuned!
Upgrading a Standalone vSphere Host With esxcli
You may have heard that there's a new vSphere release out in the wild - vSphere 8, which just reached Initial Availability this week↗. Upgrading the vCenter in my single-host homelab is a very straightforward task, and using the included Lifecycle Manager would make quick work of patching a cluster of hosts... but things get a little trickier with a single host. I could write the installer ISO to a USB drive, boot the host off of that, and go through the install interactively, but what if physical access to the host is kind of inconvenient?
The other option for upgrading a host is using the esxcli command to apply an update from an offline bundle. It's a pretty easy solution (and can even be done remotely, such as when connected to my homelab via the Tailscale node running on my Quartz64 ESXi-ARM host) but I always forget the commands.
So here's a quick note on how I upgraded my lone ESXi to the new ESXi 8 IA release so that maybe I'll remember how to do it next time and won't have to go Neeva↗'ing for the answer again.
0: Download the offline bundle Downloading the Offline Bundle from VMware Customer Connect↗ yields a file named VMware-ESXi-8.0-20513097-depot.zip.
1: Transfer the bundle to the host I've found that the easiest way to do this is to copy it to a datastore which is accessible from the host (see the example below). 2. Power down VMs The host will need to be in maintenance mode in order to apply the upgrade, and since it's a standalone host it won't enter maintenance mode until all of its VMs have been stopped. This can be easily accomplished through the ESXi embedded host client.
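For that transfer in step 1, something along these lines works from any workstation that can SSH to the host (the hostname here is a placeholder, and the datastore path simply mirrors the one used in the commands below):
scp VMware-ESXi-8.0-20513097-depot.zip root@esxi-host:/vmfs/volumes/nuchost-local/_Patches/ # [tl! .cmd]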
3. Place host in maintenance mode I can do that by SSH'ing to the host and running:
esxcli system maintenanceMode set -e true # [tl! .cmd] And can confirm that it happened with:
esxcli system maintenanceMode get # [tl! .cmd] Enabled # [tl! .nocopy] 4. Identify the profile name Because this is an upgrade from one major release to another rather than a simple update, I need to know the name of the profile which will be applied. I can identify that with:
esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip # [tl! .cmd] Name Vendor Acceptance Level Creation Time Modification Time # [tl! .nocopy:3] ---------------------------- ------------ ---------------- ------------------- ----------------- ESXi-8.0.0-20513097-standard VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28 ESXi-8.0.0-20513097-no-tools VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28 Absolute paths
When using the esxcli command to install software/updates, it's important to use absolute paths rather than relative paths. Otherwise you'll get errors and wind up chasing your tail for a while.
In this case, I'll use the ESXi-8.0.0-20513097-standard profile.
5. Install the upgrade Now for the moment of truth:
esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard # [tl! .cmd] When it finishes (successfully), it leaves a little message that the update won't be complete until the host is rebooted, so I'll go ahead and do that as well:
reboot # [tl! .cmd] And then wait (oh-so-patiently) for the host to come back up.
6. Resume normal operation Once the reboot is complete, log in to the host client to verify the upgrade was successful. You can then exit maintenance mode and start powering on the VMs again.
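Exiting maintenance mode can be done from the host client, or right from the same SSH session by flipping the earlier esxcli command to false:
esxcli system maintenanceMode set -e false # [tl! .cmd]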
The upgrade process took me about 20 minutes from start to finish, and now I'm ready to get on with exploring what's new in vSphere 8↗!
Using the vSphere Diagnostic Tool Fling
VMware vCenter does wonders for abstracting away the layers of complexity involved in managing a large virtual infrastructure, but when something goes wrong it can be challenging to find exactly where the problem lies. And it can be even harder to proactively address potential issues before they occur.
Fortunately there's a super-handy utility which can make diagnosing vCenter significantly easier, and it comes in the form of the vSphere Diagnostic Tool Fling↗. VDT is a Python script which can be run directly on a vCenter Server appliance (version 6.5 and newer) to quickly check for problems and misconfigurations affecting:
vCenter Basic Info, Lookup Service, Active Directory, vCenter Certificates, Core Files, Disk Health, vCenter DNS, vCenter NTP, vCenter Port, Root Account, vCenter Services, and VCHA. For any problems which are identified, VDT will provide simple instructions and/or links to Knowledge Base articles for more detailed instructions on how to proceed with resolving the issues. Sounds pretty useful, right? And yet, somehow, I keep forgetting that VDT is a thing. So here's a friendly reminder to myself of how to obtain and use VDT to fix vSphere woes. Let's get started.
1. Obtain Obtaining the vSphere Diagnostic Tool is very easy. Just point a browser to https://flings.vmware.com/vsphere-diagnostic-tool↗, tick the box to agree to the Technical Preview License, and click the big friendly Download button. It will show up in .zip format.
2. Deploy This needs to be run directly on the vCenter appliance so you'll need to copy the .zip package onto that server. Secure Copy Protocol (SCP)↗ is a great way to make that happen. By default, though, the VCSA uses a limited Appliance Shell which won't allow for file transfers. Fortunately it's easy to follow this KB↗ to switch that to a more conventional bash shell:
SSH to the VCSA. Execute shell to launch the bash shell. Execute chsh -s /bin/bash to set bash as the default shell. Once that's done, just execute this on your local workstation to copy the .zip from your ~/Downloads/ folder to the VCSA's /tmp/ directory:
scp ~/Downloads/vdt-v1.1.4.zip [email protected]:/tmp/ # [tl! .cmd] 3. Extract Now pop back over to an SSH session to the VCSA, extract the .zip, and get ready for action:
cd /tmp # [tl! .cmd_root:1] unzip vdt-v1.1.4.zip Archive: vdt-v1.1.4.zip # [tl! .nocopy:5] 3557676756cffd658fd61aab5a6673269104e83c creating: vdt-v1.1.4/ ... inflating: vdt-v1.1.4/vdt.py cd vdt-v1.1.4/ # [tl! .cmd_root] 4. Execute Now for the fun part:
python vdt.py # [tl! .cmd_root] _________________________ # [tl! .nocopy:7] RUNNING PULSE CHECK Today: Sunday, August 28 19:53:00 Version: 1.1.4 Log Level: INFO Provide password for [email protected]: After entering the SSO password, VDT will run for a few minutes and generate an on-screen report of its findings. Reports can also be found in the /var/log/vmware/vdt/ directory.
5. Review Once the script has completed, it's time to look through the results and fix whatever can be found. As an example, here are some of the findings from my deliberately-broken-for-the-purposes-of-this-post vCenter:
Hostname/PNID mismatch VCENTER BASIC INFO BASIC: Current Time: 2022-08-28 19:54:08.370889 vCenter Uptime: up 2 days vCenter Load Average: 0.26, 0.19, 0.12 Number of CPUs: 2 Total Memory: 11.71 vCenter Hostname: VCSA # [tl! highlight:1] vCenter PNID: vcsa.lab.bowdre.net vCenter IP Address: 192.168.1.12 Proxy Configured: &#34;no&#34; NTP Servers: pool.ntp.org vCenter Node Type: vCenter with Embedded PSC vCenter Version: 7.0.3.00800 - 20150588 DETAILS: vCenter SSO Domain: vsphere.local vCenter AD Domain: No DOMAIN Number of ESXi Hosts: 2 Number of Virtual Machines: 25 Number of Clusters: 1 Disabled Plugins: None [FAIL] The hostname and PNID do not match! # [tl! highlight:1] Please see https://kb.vmware.com/s/article/2130599 for more details. Silly me - I must have changed the hostname at some point, which is not generally a Thing Which Should Be done. I can quickly consult the referenced KB↗ to figure out how to fix my mistake using the /opt/vmware/share/vami/vami_config_net utility.
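That vami_config_net utility is an interactive script that already lives on the appliance, so correcting the hostname is just a matter of launching it from the same shell session and following the prompts:
/opt/vmware/share/vami/vami_config_net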
Missing DNS Nameserver Queries 192.168.1.5 [FAIL] DNS with UDP - unable to resolve vcsa to 192.168.1.12 # [tl! highlight:2] [FAIL] Reverse DNS - unable to resolve 192.168.1.12 to vcsa [FAIL] DNS with TCP - unable to resolve vcsa to 192.168.1.12 Commands used: dig +short &lt;fqdn&gt; &lt;nameserver&gt; dig +noall +answer -x &lt;ip&gt; &lt;namserver&gt; dig +short +tcp &lt;fqdn&gt; &lt;nameserver&gt; RESULT: [FAIL] # [tl! highlight:1] Please see KB: https://kb.vmware.com/s/article/54682 Whoops - I guess I should go recreate the appropriate DNS records.
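After the records are back in place, I can mirror the queries that VDT runs to confirm the fix before kicking off the full script again (note that dig wants the nameserver prefixed with @; substitute your own FQDN, IP, and nameserver):
dig +short vcsa.lab.bowdre.net @192.168.1.5
dig +noall +answer -x 192.168.1.12 @192.168.1.5
dig +short +tcp vcsa.lab.bowdre.net @192.168.1.5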
Old core files CORE FILE CHECK INFO: These core files are older than 72 hours. consider deleting them at your discretion to reduce the size of log bundles. FILES: /storage/core/core.SchedulerCron.p.11919 Size: 34.36MB Last Modified: 2022-08-03T22:28:01 /storage/core/core.python.1445 Size: 20.8MB Last Modified: 2022-08-03T22:13:37 /storage/core/core.python.27513 Size: 41.12MB Last Modified: 2022-07-28T04:43:55 /storage/core/core.ParGC.6 Size: 802.82MB Last Modified: 2022-07-28T04:38:54 /storage/core/core.python.12536 Size: 39.82MB Last Modified: 2022-07-28T04:18:41 /storage/core/core.python.50223 Size: 281.55MB Last Modified: 2022-07-13T22:22:13 /storage/core/core.lsassd.56082 Size: 256.82MB Last Modified: 2022-07-13T22:16:53 /storage/core/core.SchedulerCron.p.21078 Size: 29.52MB Last Modified: 2022-06-25T11:05:01 /storage/core/core.python.19815 Size: 25.44MB Last Modified: 2022-06-25T11:03:06 /storage/core/core.python.51946 Size: 25.8MB Last Modified: 2022-06-18T10:22:08 /storage/core/core.python.40291 Size: 25.44MB Last Modified: 2022-06-13T11:21:26 /storage/core/core.python.14872 Size: 43.97MB Last Modified: 2022-06-13T10:35:04 /storage/core/core.python.11833 Size: 20.71MB Last Modified: 2022-06-13T10:30:01 /storage/core/core.python.35275 Size: 42.87MB Last Modified: 2022-06-13T07:17:27 /storage/core/core.VM.6 Size: 1.21GB Last Modified: 2022-06-13T00:38:56 [INFO] Number of core files: 15 Those core files can be useful for investigating specific issues, but holding on to them long-term doesn't really do much good. After checking to be sure I don't need them, I can get rid of them all pretty easily like so:
find /storage/core/ -name &#34;core.*&#34; -type f -mtime +3 -exec rm {} \\; # [tl! .cmd_root] NTP status VC NTP CHECK [FAIL] NTP and Host time are both disabled! Oh yeah, let's turn that back on with systemctl start ntpd.
Account status Root Account Check [FAIL] Root password expires in 13 days Please search for &#39;Change the Password of the Root User&#39; in vCenter documentation. That's a good thing to know. I'll take care of that↗ while I'm thinking about it.
chage -M -1 -E -1 root # [tl! .cmd_root] Recheck Now that I've corrected these issues, I can run VDT again to confirm that everything is back in a good state:
VCENTER BASIC INFO BASIC: Current Time: 2022-08-28 20:13:25.192503 vCenter Uptime: up 2 days vCenter Load Average: 0.28, 0.14, 0.10 Number of CPUs: 2 Total Memory: 11.71 vCenter Hostname: vcsa.lab.bowdre.net # [tl! highlight:1] vCenter PNID: vcsa.lab.bowdre.net vCenter IP Address: 192.168.1.12 Proxy Configured: &#34;no&#34; NTP Servers: pool.ntp.org vCenter Node Type: vCenter with Embedded PSC vCenter Version: 7.0.3.00800 - 20150588 DETAILS: vCenter SSO Domain: vsphere.local vCenter AD Domain: No DOMAIN Number of ESXi Hosts: 2 Number of Virtual Machines: 25 Number of Clusters: 1 Disabled Plugins: None [...] Nameserver Queries 192.168.1.5 [PASS] DNS with UDP - resolved vcsa.lab.bowdre.net to 192.168.1.12 # [tl! highlight:2] [PASS] Reverse DNS - resolved 192.168.1.12 to vcsa.lab.bowdre.net [PASS] DNS with TCP - resolved vcsa.lab.bowdre.net to 192.168.1.12 Commands used: dig +short &lt;fqdn&gt; &lt;nameserver&gt; dig +noall +answer -x &lt;ip&gt; &lt;namserver&gt; dig +short +tcp &lt;fqdn&gt; &lt;nameserver&gt; RESULT: [PASS] # [tl! highlight] [...] CORE FILE CHECK [PASS] Number of core files: 0 # [tl! highlight:1] [PASS] Number of hprof files: 0 [...] NTP Status Check # [tl! collapse:start] +-----------------------------------LEGEND-----------------------------------+ | remote: NTP peer server | | refid: server that this peer gets its time from | | when: number of seconds passed since last response | | poll: poll interval in seconds | | delay: round-trip delay to the peer in milliseconds | | offset: time difference between the server and client in milliseconds | +-----------------------------------PREFIX-----------------------------------+ | * Synchronized to this peer | | # Almost synchronized to this peer | | + Peer selected for possible synchronization | | – Peer is a candidate for selection | | ~ Peer is statically configured | +----------------------------------------------------------------------------+ # [tl! collapse:end] remote refid st t when poll reach delay offset jitter ============================================================================== *104.171.113.34 130.207.244.240 2 u 1 64 17 16.831 -34.597 0.038 RESULT: [PASS] # [tl! highlight] [...] Root Account Check [PASS] Root password never expires # [tl! highlight] All better!
Conclusion The vSphere Diagnostic Tool makes a great addition to your arsenal of troubleshooting skills and utilities. It makes it easy to troubleshoot errors which might occur in your vSphere environment, as well as to uncover dormant issues which could cause serious problems in the future.
`,description:"Save time and energy by using the VMware vSphere Diagnostic Tool to quickly investigate potential configuration problems in your VMware environment.",url:"https://runtimeterror.dev/using-vsphere-diagnostic-tool-fling/"},"https://runtimeterror.dev/removing-recreating-vcls-vms/":{title:"Removing and Recreating vCLS VMs",tags:["vmware","vsphere","homelab"],content:`Way back in 2020, VMware released vSphere 7 Update 1 and introduced the new vSphere Clustering Services (vCLS)↗ to improve how cluster services like the Distributed Resource Scheduler (DRS) operate. vCLS deploys lightweight agent VMs directly on the cluster being managed, and those VMs provide a decoupled and distributed control plane to offload some of the management responsibilities from the vCenter server.
That's very cool, particularly in large continent-spanning environments or those which reach into multiple clouds, but it may not make sense to add those additional workloads in resource-constrained homelabs1. And while the vCLS VMs are supposed to be automagically self-managed, sometimes things go a little wonky and that management fails to function correctly, which can negatively impact DRS. Recovering from such a scenario is complicated by the complete inability to manage the vCLS VMs through the vSphere UI.
Fortunately there's a somewhat-hidden way to disable (and re-enable) vCLS on a per-cluster basis, and it's easy to do once you know the trick. This can help if you want to permanently disable vCLS (like in a lab environment) or if you just need to turn it off and on again2 to clean up and redeploy uncooperative agent VMs.
Proceed at your own risk
Disabling vCLS will break DRS, and could have other unintended side effects. Don't do this in prod if you can avoid it.
Find the cluster's domain ID It starts with determining the affected cluster's domain ID, which is very easy to do once you know where to look. Simply browse to the cluster object in the vSphere inventory, and look at the URL: That ClusterComputeResource:domain-c13 portion tells me exactly what I need to know: the ID for the NUC Cluster is domain-c13.
Disable vCLS for a cluster With that information gathered, you're ready to do the deed. Select the vCenter object in your vSphere inventory, head to the Configure tab, and open the Advanced Settings item.
Now click the Edit Settings button to open the editor panel. You'll need to create a new advanced setting so scroll to the bottom of the panel and enter:
Setting Name: config.vcls.clusters.domain-[id].enabled, Value: false. Then click Add and Save to apply the change.
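To make that concrete with the domain ID gathered earlier, the entry for my NUC Cluster would be config.vcls.clusters.domain-c13.enabled with a value of false; just swap in your own domain-[id].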
Within moments, the vCLS VM(s) will be powered off and deleted: Re-enable vCLS If you need to bring back vCLS (such as when troubleshooting a problematic cluster), that's as simple as changing the advanced setting again:
Setting Name: config.vcls.clusters.domain-[id].enabled, Value: true. And the VM(s) will be automatically recreated as needed: Or when running the ESXi-ARM Fling, where the vCLS VMs aren't able to be created and will just fill up the Tasks list with failures↗.
`,description:"How to remove and (optionally) recreate the vSphere Clustering Services VMs",url:"https://runtimeterror.dev/removing-recreating-vcls-vms/"},"https://runtimeterror.dev/gitea-self-hosted-git-server/":{title:"Gitea: Ultralight Self-Hosted Git Server",tags:["caddy","linux","docker","cloud","tailscale","selfhosting"],content:`I recently started using Obsidian↗ for keeping notes, tracking projects, and just generally organizing all the information that would otherwise pass into my brain and then fall out the other side. Unlike other similar solutions which operate entirely in The Cloud, Obsidian works with Markdown files stored in a local folder1, which I find to be very attractive. Not only will this allow me to easily transfer my notes between apps if I find something I like better than Obsidian, but it also opens the door to using git to easily back up all this important information.
Some of the contents might be somewhat sensitive, though, and I'm not sure I'd want to keep that data on a service outside of my control. A self-hosted option would be ideal. Gitlab seemed like an obvious choice, but the resource requirements are a bit higher than would be justified by my single-user use case. I eventually came across Gitea↗, a lightweight Git server with a simple web interface (great for a Git novice like myself!) which boasts the ability to run on a Raspberry Pi. This sounded like a great candidate for running on an Ampere ARM-based compute instance↗ in my Oracle Cloud free tier↗ environment!
In this post, I'll describe what I did to get Gitea up and running on a tiny ARM-based cloud server (though I'll just gloss over the cloud-specific configurations), as well as how I'm leveraging Tailscale to enable SSH Git access without having to expose that service to the internet. I based the bulk of this on the information provided in Gitea's Install With Docker↗ documentation.
Create the server I'll be deploying this on a cloud server with these specs:
Shape: VM.Standard.A1.Flex, Image: Ubuntu 22.04, CPU Count: 1, Memory (GB): 6, Boot Volume (GB): 50. I've described the process of creating a new instance on OCI in a past post so I won't reiterate that here. The only gotcha this time is switching the shape to VM.Standard.A1.Flex; the OCI free tier↗ allows two AMD Compute VMs (which I've already used up) as well as up to four ARM Ampere A1 instances2.
Prepare the server Once the server's up and running, I go through the usual steps of applying any available updates:
sudo apt update # [tl! .cmd:1] sudo apt upgrade Install Tailscale And then I'll install Tailscale using their handy-dandy bootstrap script:
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd] When I bring up the Tailscale interface, I'll use the --advertise-tags flag to identify the server with an ACL tag↗. (Within my tailnet3, all of my other clients are able to connect to devices bearing the cloud tag but cloud servers can only reach back to other devices for performing DNS lookups.)
sudo tailscale up --advertise-tags &#34;tag:cloud&#34; # [tl! .cmd] Install Docker Next I install Docker and docker-compose:
sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd:2] curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg echo \\ &#34;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \\ $(lsb_release -cs) stable&#34; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null sudo apt update # [tl! .cmd:1] sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-compose-plugin Configure firewall This server automatically had an iptables firewall rule configured to permit SSH access. For Gitea, I'll also need to configure HTTP/HTTPS access. As before, I need to be mindful of the explicit REJECT all rule at the bottom of the INPUT chain:
sudo iptables -L INPUT --line-numbers # [tl! .cmd] Chain INPUT (policy ACCEPT) # [tl! .nocopy:8] num target prot opt source destination 1 ts-input all -- anywhere anywhere 2 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED 3 ACCEPT icmp -- anywhere anywhere 4 ACCEPT all -- anywhere anywhere 5 ACCEPT udp -- anywhere anywhere udp spt:ntp 6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh 7 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited So I'll insert the new rules at line 6:
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1] sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT And confirm that it did what I wanted it to:
sudo iptables -L INPUT --line-numbers # [tl! focus .cmd] Chain INPUT (policy ACCEPT) # [tl! .nocopy:10] num target prot opt source destination 1 ts-input all -- anywhere anywhere 2 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED 3 ACCEPT icmp -- anywhere anywhere 4 ACCEPT all -- anywhere anywhere 5 ACCEPT udp -- anywhere anywhere udp spt:ntp 6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https # [tl! focus:1] 7 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http 8 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh 9 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited That looks good, so let's save the new rules:
sudo netfilter-persistent save # [tl! .cmd] run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1] run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save Cloud Firewall
Of course I will also need to create matching rules in the cloud firewall, but I'm not going to detail those steps again here. And since I've now got Tailscale up and running I can remove the pre-created rule to allow SSH access through the cloud firewall.
Install Gitea I'm now ready to move on with installing Gitea itself.
Prepare git user I'll start with creating a git user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate SSH passthrough↗ into the container for secure git operations.
Here's where I create the account and also generate what will become the SSH key used by the git server:
sudo useradd -s /bin/bash -m git # [tl! .cmd:1] sudo -u git ssh-keygen -t ecdsa -C &#34;Gitea Host Key&#34; The git user's SSH public key gets added as-is directly to that user's authorized_keys file:
sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys # [tl! .cmd:1] sudo -u git chmod 600 /home/git/.ssh/authorized_keys When other users add their SSH public keys into Gitea's web UI, those will get added to authorized_keys with a little something extra: an alternate command to perform git actions instead of just SSH ones:
command=&#34;/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1&#34;,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty &lt;user pubkey&gt; Not just yet
No users have added their keys to Gitea just yet so if you look at /home/git/.ssh/authorized_keys right now you won't see this extra line, but I wanted to go ahead and mention it to explain the next step. It'll show up later. I promise.
So I'll go ahead and create that extra command:
cat &lt;&lt;&#34;EOF&#34; | sudo tee /usr/local/bin/gitea # [tl! .cmd] #!/bin/sh ssh -p 2222 -o StrictHostKeyChecking=no [email protected] &#34;SSH_ORIGINAL_COMMAND=\\&#34;$SSH_ORIGINAL_COMMAND\\&#34; $0 $@&#34; EOF sudo chmod +x /usr/local/bin/gitea # [tl! .cmd] So when I use a git command to interact with the server via SSH, the commands will get relayed into the Docker container on port 2222.
Create docker-compose definition That takes care of most of the prep work, so now I'm ready to create the docker-compose.yaml file which will tell Docker how to host Gitea.
I'm going to place this in /opt/gitea:
sudo mkdir -p /opt/gitea # [tl! .cmd:1] cd /opt/gitea And I want to be sure that my new git user owns the ./data directory which will be where the git contents get stored:
sudo mkdir data # [tl! .cmd:1] sudo chown git:git -R data Now to create the file:
sudo vi docker-compose.yaml # [tl! .cmd] The basic contents of the file came from the Gitea documentation for Installation with Docker↗, but I also included some (highlighted) additional environment variables based on the Configuration Cheat Sheet↗:
docker-compose.yaml:
# torchlight! {"lineNumbers": true}
version: "3"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    environment:
      - USER_UID=1003 # [tl! highlight:1]
      - USER_GID=1003
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
      - GITEA____APP_NAME=Gitea # [tl! highlight:start]
      - GITEA__log__MODE=file
      - GITEA__openid__ENABLE_OPENID_SIGNIN=false
      - GITEA__other__SHOW_FOOTER_VERSION=false
      - GITEA__repository__DEFAULT_PRIVATE=private
      - GITEA__repository__DISABLE_HTTP_GIT=true
      - GITEA__server__DOMAIN=git.bowdre.net
      - GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net
      - GITEA__server__ROOT_URL=https://git.bowdre.net/
      - GITEA__server__LANDING_PAGE=explore
      - GITEA__service__DISABLE_REGISTRATION=true
      - GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true
      - GITEA__ui__DEFAULT_THEME=arc-green # [tl! highlight:end]
    restart: always
    networks:
      - gitea
    volumes:
      - ./data:/data
      - /home/git/.ssh/:/data/git/.ssh # [tl! highlight]
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "127.0.0.1:2222:22" # [tl! highlight]
    depends_on:
      - db
  db:
    image: postgres:14
    container_name: gitea_db
    restart: always
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea
      - POSTGRES_DB=gitea
    networks:
      - gitea
    volumes:
      - ./postgres:/var/lib/postgresql/data
Pin the PostgreSQL version
The format of PostgreSQL data changes with new releases, and that means that the data created by different major releases are not compatible. Unless you take steps to upgrade the data format, you'll have problems when a new major release of PostgreSQL arrives. Avoid the headache: pin this to a major version (as I did with image: postgres:14 above) so you can upgrade on your terms.
Let's go through the extra configs in a bit more detail:
Variable setting and its purpose:
USER_UID=1003: User ID of the git user on the container host
USER_GID=1003: Group ID of the git user on the container host
GITEA____APP_NAME=Gitea: Sets the title of the site. I shortened it from "Gitea: Git with a cup of tea" because that seems unnecessarily long.
GITEA__log__MODE=file: Enable logging
GITEA__openid__ENABLE_OPENID_SIGNIN=false: Disable signin through OpenID
GITEA__other__SHOW_FOOTER_VERSION=false: Anyone who hits the web interface doesn't need to know the version
GITEA__repository__DEFAULT_PRIVATE=private: All repos will default to private unless I explicitly override that
GITEA__repository__DISABLE_HTTP_GIT=true: Require that all Git operations occur over SSH
GITEA__server__DOMAIN=git.bowdre.net: Domain name of the server
GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net: Leverage Tailscale's MagicDNS↗ to tell clients how to SSH to the Tailscale internal IP
GITEA__server__ROOT_URL=https://git.bowdre.net/: Public-facing URL
GITEA__server__LANDING_PAGE=explore: Defaults to showing the "Explore" page (listing any public repos) instead of the "Home" page (which just tells about the Gitea project)
GITEA__service__DISABLE_REGISTRATION=true: New users will not be able to self-register for access; they will have to be manually added by the Administrator account that will be created during the initial setup
GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true: Don't allow browsing of user accounts
GITEA__ui__DEFAULT_THEME=arc-green: Default to the darker theme
Beyond the environment variables, I also defined a few additional options to allow the SSH passthrough to function. Mounting the git user's SSH config directory into the container will ensure that user keys defined in Gitea will also be reflected outside of the container, and setting the container to listen on local port 2222 will allow it to receive the forwarded SSH connections:
volumes: # [tl! focus]
  - ./data:/data
  - /home/git/.ssh/:/data/git/.ssh # [tl! focus]
  - /etc/timezone:/etc/timezone:ro
  - /etc/localtime:/etc/localtime:ro
ports: # [tl! focus]
  - "3000:3000"
  - "127.0.0.1:2222:22" # [tl! focus]
With the config in place, I'm ready to fire it up:
Start containers Starting Gitea is as simple as
sudo docker-compose up -d # [tl! .cmd] which will spawn both the Gitea server as well as a postgres database to back it.
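Before going further, a couple of quick sanity checks from the host can confirm that both containers came up and that Gitea is answering on its internal port (a sketch, using the container names defined in the compose file):
sudo docker ps --filter name=gitea    # should list both gitea and gitea_db as Up
curl -I http://localhost:3000         # should return HTTP response headers from Gitea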
Gitea will be listening on port 3000.... which isn't exposed outside of the VM it's running on so I can't actually do anything with it just yet. Let's see about changing that.
Configure Caddy reverse proxy I've written before about Caddy server↗ and how simple it makes creating a reverse proxy with automatic HTTPS. While Gitea does include built-in HTTPS support↗, configuring that to work within Docker seems like more work to me.
Install Caddy So exactly how simple does Caddy make this? Well let's start with installing Caddy on the system:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4] curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/gpg.key&#39; | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt&#39; | sudo tee /etc/apt/sources.list.d/caddy-stable.list sudo apt update sudo apt install caddy Configure Caddy Configuring Caddy is as simple as creating a Caddyfile:
sudo vi /etc/caddy/Caddyfile # [tl! .cmd] Within that file, I tell it which fully-qualified domain name(s) I'd like it to respond to (and manage SSL certificates for), as well as that I'd like it to function as a reverse proxy and send the incoming traffic to the same port 3000 that's used by the Docker container:
git.bowdre.net { reverse_proxy localhost:3000 } That's it. I don't need to worry about headers or ACME configurations or anything else. Those three lines are all that's required for this use case. It almost seems too easy!
Start Caddy All that's left at this point is to start up Caddy:
sudo systemctl enable caddy # [tl! .cmd:2] sudo systemctl start caddy sudo systemctl restart caddy I found that the restart is needed to make sure that the config file gets loaded correctly. And after a moment or two, I can point my browser over to https://git.bowdre.net and see the default landing page, complete with a valid certificate.
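If the site doesn't come up cleanly, Caddy's built-in validator and the systemd journal are good first stops (assuming the default Caddyfile location used above):
caddy validate --config /etc/caddy/Caddyfile
sudo journalctl -u caddy --since "10 minutes ago"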
Configure Gitea Now that Gitea is installed, I'll need to go through the initial configuration process to actually be able to use it. Fortunately most of this stuff was taken care of by all the environment variables I crammed into the docker-compose.yaml file earlier. All I really need to do is create an administrative user: I can now press the friendly Install Gitea button, and after just a few seconds I'll be able to log in with that new administrator account.
Create user account I don't want to use that account for all my git actions though so I click on the menu at the top right and select the Site Administration option: From there I can navigate to the User Accounts tab and use the Create User Account button to make a new account: And then I can log out and log back in with my new non-admin identity!
Add SSH public key Associating a public key with my new Gitea account will allow me to easily authenticate my pushes from the command line. I can create a new SSH public/private keypair by following GitHub's instructions↗:
ssh-keygen -t ed25519 -C &#34;[email protected]&#34; # [tl! .cmd] I'll view the contents of the public key - and go ahead and copy the output for future use:
cat ~/.ssh/id_ed25519.pub # [tl! .cmd] ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J [email protected] # [tl! .nocopy] Back in the Gitea UI, I'll click the user menu up top and select Settings, then the SSH / GPG Keys tab, and click the Add Key button:
I can give the key a name and then paste in that public key, and then click the lower Add Key button to insert the new key.
To verify that the SSH passthrough magic I configured earlier is working, I can take a look at git's authorized_keys file:
sudo tail -2 /home/git/.ssh/authorized_keys # [tl! .cmd] # gitea public key [tl! .nocopy:1] command=&#34;/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-3&#34;,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,no-user-rc,restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J [email protected] Hey - there's my public key, being preceded by the customized command I defined earlier. There's one last thing I'd like to do before I get to populating my new server with content...
Configure Fail2ban I'm already limiting this server's exposure by blocking inbound SSH (except for what's magically tunneled through Tailscale) at the Oracle Cloud firewall, but I still have to have TCP ports 80 and 443 open for the web interface. It would be nice if those web ports didn't get hammered with invalid login attempts.
Fail2ban↗ can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender.
Installing Fail2ban is simple:
sudo apt update # [tl! .cmd:1] sudo apt install fail2ban Then I need to tell Fail2ban what to look for when detecting failed logins to Gitea. This can often be a tedious process of crawling through logs looking for example failure messages, but fortunately the Gitea documentation↗ tells me what I need to know.
Specifically, I'll want to watch log/gitea.log for messages like the following:
2018/04/26 18:15:54 [I] Failed authentication attempt for user from xxx.xxx.xxx.xxx 2020/10/15 16:08:44 ...s/context/context.go:204:HandleText() [E] invalid credentials from xxx.xxx.xxx.xxx So let's create that filter:
sudo vi /etc/fail2ban/filter.d/gitea.conf # [tl! .cmd] # torchlight! {&#34;lineNumbers&#34;: true} # /etc/fail2ban/filter.d/gitea.conf [Definition] failregex = .*(Failed authentication attempt|invalid credentials).* from &lt;HOST&gt; ignoreregex = Next I create the jail, which tells Fail2ban what to do:
sudo vi /etc/fail2ban/jail.d/gitea.conf # [tl! .cmd] # torchlight! {&#34;lineNumbers&#34;: true} # /etc/fail2ban/jail.d/gitea.conf [gitea] enabled = true filter = gitea logpath = /opt/gitea/data/gitea/log/gitea.log maxretry = 5 findtime = 3600 bantime = 86400 action = iptables-allports This configures Fail2ban to watch the log file (logpath) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (gitea). If a system fails to log in 5 times (maxretry) within 1 hour (findtime, in seconds) then the offending IP will be banned for 1 day (bantime, in seconds).
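Looking ahead, once the jail is active, fail2ban-client gives a quick view into its status and a way to un-ban an address if I manage to lock myself out while testing (using the gitea jail name configured above):
sudo fail2ban-client status gitea
sudo fail2ban-client set gitea unbanip <offending_ip>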
Then I just need to enable and start Fail2ban:
sudo systemctl enable fail2ban # [tl! .cmd:1] sudo systemctl start fail2ban To verify that it's working, I can deliberately fail to log in to the web interface and watch /var/log/fail2ban.log:
sudo tail -f /var/log/fail2ban.log # [tl! .cmd] 2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found \${MY_HOME_IP}| - 2022-07-17 21:52:26 # [tl! .nocopy] Excellent, let's now move on to creating some content.
Work with Gitea Mirror content from GitHub As an easy first sync, I'm going to simply link a new repository on this server to an existing one I have at GitHub, namely this one↗ which I'm using to track some of my vRealize work. I'll set this up as a one-way mirror so that it will automatically pull in any new upstream changes but new commits made through Gitea will stay in Gitea. And I'll do that by clicking the + button at the top right and selecting New Migration.
Gitea includes support for easy migrations from several content sources: I pick the GitHub one and then plug in the details of the GitHub repo: And after just a few moments, all the content from my GitHub repo shows up in my new Gitea one: You might have noticed that I unchecked the Make Repository Private option for this one, so feel free to browse the mirrored repo at https://git.bowdre.net/vPotato/vrealize↗ if you'd like to check out Gitea for yourself.
Create a new repo The real point of this whole exercise was to sync my Obsidian vault to a Git server under my control, so it's time to create a place for that content to live. I'll go to the + menu again but this time select New Repository, and then enter the required information: Once it's created, the new-but-empty repository gives me instructions on how I can interact with it. Note that the SSH address uses the special git.tadpole-jazz.ts.net Tailscale domain name which is only accessible within my tailnet.
Now I can follow the instructions to initialize my local Obsidian vault (stored at ~/obsidian-vault/) as a git repository and perform my initial push to Gitea:
cd ~/obsidian-vault/ # [tl! .cmd:5] git init git add . git commit -m &#34;initial commit&#34; git remote add origin [email protected]:john/obsidian-vault.git git push -u origin main And if I refresh the page in my browser, I'll see all that content which has just been added: Conclusion So now I've got a lightweight, web-enabled, personal git server running on a (free!) cloud server under my control. It's working brilliantly in conjunction with the community-maintained obsidian-git↗ plugin for keeping my notes synced across my various computers. On Android, I'm leveraging the free GitJournal↗ app as a simple git client for pulling the latest changes (as described on another blog I found↗).
Obsidian does offer a paid Sync↗ plugin for keeping the content on multiple devices in sync, but it's somewhat spendy at $10/month. And much of the appeal of using a Markdown-based system for managing my notes is being in full control of the content. Plus I wanted an excuse to build a git server.
The first 3000 OCPU hours and 18,000 GB hours per month are free, equivalent to 4 OCPUs and 24 GB of memory allocated however you see fit.&#160;&#x21a9;&#xfe0e;
Tailscale's term↗ for the private network which securely links Tailscale-connected devices.&#160;&#x21a9;&#xfe0e;
`,description:"Deploying the lightweight Gitea  Git server on Oracle Cloud's free Ampere Compute.",url:"https://runtimeterror.dev/gitea-self-hosted-git-server/"},"https://runtimeterror.dev/getting-started-vra-rest-api/":{title:"Getting Started with the vRealize Automation REST API",tags:["vmware","vra","javascript","vro","automation","rest","api"],content:`I've been doing a bit of work lately to make my vRealize Automation setup more flexible and dynamic and less dependent upon hardcoded values. To that end, I thought it was probably about time to learn how to interact with the vRA REST API. I wrote this post to share what I've learned and give a quick crash course on how to start doing things with the API.
Exploration Toolkit Swagger It can be difficult to figure out where to start when learning a new API. Fortunately, VMware thoughtfully included a Swagger↗ specification for the API so that we can explore it in an interactive way. This is available for on-prem vRA environments at https://{vra-fqdn}/automation-ui/api-docs/ (so https://vra.lab.bowdre.net/automation-ui/api-docs/ in my case). You can also browse most of it online at www.mgmt.cloud.vmware.com/automation-ui/api-docs/↗1. Playing with Swagger on your on-prem instance will even let you perform requests straight from the browser, which can be a great way to gain familiarity with how requests should be structured.
I'm ultimately going to be working with the Infrastructure as a Service API but before I can do that I'll need to talk to the Identity API to log in. So let's start the exploration there, with the Login Controller.
That tells me that I'll need to send a POST request to the endpoint at /csp/gateway/am/api/login, and I'll need to include username, password, domain, and scope in the request body. I can click the Try it out button to take this endpoint for a spin and just insert appropriate values in the request:2 After hitting Execute, the Swagger UI will populate the Responses section with some useful information, like how the request would be formatted for use with curl: So I could easily replicate this using the curl utility by just copying and pasting the following into a shell:
curl -X &#39;POST&#39; \\ # [tl! .cmd] &#39;https://vra.lab.bowdre.net/csp/gateway/am/api/login&#39; \\ -H &#39;accept: */*&#39; \\ -H &#39;Content-Type: application/json&#39; \\ -d &#39;{ &#34;username&#34;: &#34;vra&#34;, &#34;password&#34;: &#34;********&#34;, &#34;domain&#34;: &#34;lab.bowdre.net&#34;, &#34;scope&#34;: &#34;&#34; }&#39; Scrolling further reveals the authentication token returned by the identity service: I can copy the contents of that cspAuthToken field and use it for authenticating other API operations. For instance, I'll go to the Infrastructure as a Service API Swagger UI and click the Authorize button at the top of the screen: And then paste the token into the header as Bearer [token]: Now I can go find an IaaS API that I'm interested in querying (like /iaas/api/flavor-profiles to see which flavor mappings are defined in vRA), and hit Try it out and then Execute: And here's the result:
// torchlight! {&#34;lineNumbers&#34;: true} { &#34;content&#34;: [ { &#34;flavorMappings&#34;: { &#34;mapping&#34;: { &#34;1vCPU | 2GB [tiny]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 1, &#34;memoryInMB&#34;: 2048 }, &#34;1vCPU | 1GB [micro]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 1, &#34;memoryInMB&#34;: 1024 }, &#34;2vCPU | 4GB [small]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 2, &#34;memoryInMB&#34;: 4096 } }, // [tl! collapse:5] &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f&#34; } } },// [tl! collapse:start] &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-39056&#34;, &#34;cloudAccountId&#34;: &#34;75d29635-f128-4b85-8cf9-95a9e5981c68&#34;, &#34;name&#34;: &#34;&#34;, &#34;id&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9-3617c011-39db-466e-a7f3-029f4523548f&#34;, &#34;updatedAt&#34;: &#34;2022-05-05&#34;, &#34;organizationId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;orgId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;_links&#34;: { &#34;self&#34;: { &#34;href&#34;: &#34;/iaas/api/flavor-profiles/61ebe5bf-5f55-4dee-8533-7ad05c067dd9-3617c011-39db-466e-a7f3-029f4523548f&#34; }, &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f&#34; } // [tl! collapse:end] } }, { &#34;flavorMappings&#34;: { &#34;mapping&#34;: { &#34;2vCPU | 8GB [medium]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 2, &#34;memoryInMB&#34;: 8192 }, &#34;1vCPU | 2GB [tiny]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 1, &#34;memoryInMB&#34;: 2048 }, &#34;8vCPU | 16GB [giant]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 8, &#34;memoryInMB&#34;: 16384 }, &#34;1vCPU | 1GB [micro]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 1, &#34;memoryInMB&#34;: 1024 }, &#34;2vCPU | 4GB [small]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 2, &#34;memoryInMB&#34;: 4096 }, &#34;4vCPU | 12GB [large]&#34;: { // [tl! focus] &#34;cpuCount&#34;: 4, &#34;memoryInMB&#34;: 12288 } }, // [tl! collapse:5] &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; } } }, // [tl! collapse:start] &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-1001&#34;, &#34;cloudAccountId&#34;: &#34;75d29635-f128-4b85-8cf9-95a9e5981c68&#34;, &#34;name&#34;: &#34;&#34;, &#34;id&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9-c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34;, &#34;updatedAt&#34;: &#34;2022-05-05&#34;, &#34;organizationId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;orgId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;_links&#34;: { &#34;self&#34;: { &#34;href&#34;: &#34;/iaas/api/flavor-profiles/61ebe5bf-5f55-4dee-8533-7ad05c067dd9-c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; }, &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; } } // [tl! collapse:end] } ], &#34;totalElements&#34;: 2, &#34;numberOfElements&#34;: 2 } So that API call will tell me about the Flavor Profiles as well as which Region the profiles belong to.
As you can see, Swagger can really help to jump-start the exploration of a new API, but it can get a bit clumsy for repeated queries. And while I could just use curl for further API exercises, I'd rather use a tool built specifically for API tomfoolery:
HTTPie HTTPie↗ is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper.
Installing the Debian package↗ is a piece of cake pie3:
curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add - # [tl! .cmd:3] sudo curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list sudo apt update sudo apt install httpie Once installed, running http will give me a quick overview of how to use this new tool:
http # [tl! .cmd] usage: # [tl! .nocopy:start] http [METHOD] URL [REQUEST_ITEM ...] error: the following arguments are required: URL for more information: run &#39;http --help&#39; or visit https://httpie.io/docs/cli # [tl! .nocopy:end] HTTPie cleverly interprets anything passed after the URL as a request item↗, and it determines the item type based on a simple key/value syntax:
Each request item is simply a key/value pair separated with the following characters: : (headers), = (data field, e.g., JSON, form), := (raw data field), == (query parameters), @ (file upload).
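As a quick (purely hypothetical) illustration of those separators against a placeholder endpoint, a single request might mix several item types:
https POST example.com/api/things X-API-Key:abc123 name='demo' count:=3 verbose==true
That sends X-API-Key as a header, name as a JSON string field, count as a raw (non-string) JSON field, and verbose as a URL query parameter.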
So my earlier request for an authentication token becomes:
https POST vra.lab.bowdre.net/csp/gateway/am/api/login username=&#39;vra&#39; password=&#39;********&#39; domain=&#39;lab.bowdre.net&#39; # [tl! .cmd] Working with Self-Signed Certificates
If your vRA endpoint is using a self-signed or otherwise untrusted certificate, pass the HTTPie option --verify=no to ignore certificate errors:
https --verify=no POST [URL] [REQUEST_ITEMS] # [tl! .cmd] Running that will return a bunch of interesting headers but I'm mainly interested in the response body:
{ &#34;cspAuthToken&#34;: &#34;eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa&#34; } There's the auth token4 that I'll need for subsequent requests. I'll store that in a variable so that it's easier to wield:
token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa # [tl! .cmd] So now if I want to find out which images have been configured in vRA, I can ask:
https GET vra.lab.bowdre.net/iaas/api/images &#34;Authorization: Bearer $token&#34; # [tl! .cmd] Request Items
Remember from above that HTTPie will automatically insert key/value pairs separated by a colon into the request header.
And I'll get back some headers followed by a JSON object detailing the defined image mappings broken up by region:
// torchlight! {&#34;lineNumbers&#34;: true} { &#34;content&#34;: [ { &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f&#34; } }, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-39056&#34;, &#34;mapping&#34;: { &#34;Photon 4&#34;: { // [tl! focus] &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f&#34; // [tl! focus] } }, &#34;cloudConfig&#34;: &#34;&#34;, &#34;constraints&#34;: [], &#34;description&#34;: &#34;photon-arm&#34;, &#34;externalId&#34;: &#34;50023810-ae56-3c58-f374-adf6e0645886&#34;, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-39056&#34;, &#34;id&#34;: &#34;8885e87d8a5898cf12b5abc3e5c715e5a65f7179&#34;, &#34;isPrivate&#34;: false, &#34;name&#34;: &#34;photon-arm&#34;, &#34;osFamily&#34;: &#34;LINUX&#34; } } }, { &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; } }, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-1001&#34;, &#34;mapping&#34;: { &#34;Photon 4&#34;: { // [tl! focus] &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; // [tl! focus] } }, &#34;cloudConfig&#34;: &#34;&#34;, &#34;constraints&#34;: [], &#34;description&#34;: &#34;photon&#34;, &#34;externalId&#34;: &#34;50028cf1-88b8-52e8-58a1-b8354d4207b0&#34;, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-1001&#34;, &#34;id&#34;: &#34;d417648249e9740d7561188fa2a3a3ab4e8ccf85&#34;, &#34;isPrivate&#34;: false, &#34;name&#34;: &#34;photon&#34;, &#34;osFamily&#34;: &#34;LINUX&#34; }, &#34;Windows Server 2019&#34;: { // [tl! focus] &#34;_links&#34;: { &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; // [tl! focus] } }, &#34;cloudConfig&#34;: &#34;&#34;, &#34;constraints&#34;: [], &#34;description&#34;: &#34;ws2019&#34;, &#34;externalId&#34;: &#34;500235ad-1022-fec3-8ad1-00433beee103&#34;, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-1001&#34;, &#34;id&#34;: &#34;7e05f4e57ac55135cf7a7f8b951aa8ccfcc335d8&#34;, &#34;isPrivate&#34;: false, &#34;name&#34;: &#34;ws2019&#34;, &#34;osFamily&#34;: &#34;WINDOWS&#34; } } } ], &#34;numberOfElements&#34;: 2, &#34;totalElements&#34;: 2 } This doesn't give me the name of the regions, but I could use the _links.region.href data to quickly match up images which exist in a given region.5
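Since HTTPie prints just the response body when its output is piped, a bit of jq can pair each image name with its region URI for quick cross-referencing (a sketch, assuming jq is installed and using the field names shown in the response above):
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token" | \
  jq -r '.content[] | ._links.region.href as $region | .mapping | keys[] | "\($region)\t\(.)"'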
You'll notice that HTTPie also prettifies the JSON response to make it easy for humans to parse. This is great for experimenting with requests against different API endpoints and getting a feel for what data can be found where. And firing off tests in HTTPie can be a lot quicker (and easier to format) than with other tools.
Now let's take what we've learned and see about implementing it as vRO actions.
vRealize Orchestrator actions My immediate goal for this exercise is to create a set of vRealize Orchestrator actions which take in a zone/location identifier from the Cloud Assembly request and return a list of images which are available for deployment there. I'll start with some utility actions to do the heavy lifting, and then I'll be able to call them from other actions as things get more complicated/interesting. Before I can do that, though, I'll need to add the vRA instance as an HTTP REST endpoint in vRO.
This post brought to you by...
A lot of what follows was borrowed heavily from a very helpful post by Oktawiusz Poranski over at Automate Clouds↗ so be sure to check out that site for more great tips on working with APIs!
Creating the endpoint I can use the predefined Add a REST host workflow to create the needed endpoint. Configuring the host properties here is very simple: just give it a name and the URL of the vRA instance.
On the Authentication tab, I set the auth type to NONE; I'll handle the authentication steps directly.
With those parameters in place I can kick off the workflow. Once it completes, I can check Administration &gt; Inventory &gt; HTTP-REST to see the newly-created endpoint: Creating a Configuration for the endpoint I don't want to hardcode any endpoints or credentials into my vRO actions so I'm also going to create a Configuration to store those details. This will make it easy to reference those same details from multiple actions as well.
To create a new Configuration, I'll head to Assets &gt; Configurations, select a folder (or create a new one) where I want the Configuration to live, and then click the New Configuration button. I'm going to call this new Configuration Endpoints since I plan to use it for holding host/credentials for additional endpoints in the future. After giving it a name, I'll go ahead and click Save to preserve my efforts thus far.
I'll then click over to the Variables tab and create a new variable to store my vRA endpoint details; I'll call it vRAHost, and hit the Type dropdown and select New Composite Type.
This new composite type will let me use a single variable to store multiple values - basically everything I'll need to interact with a single REST endpoint:
The variables and their types: host (REST:RESTHost), username (string), domain (string), password (SecureString). I can then map the appropriate values for this new variable and hit Create. I make sure to Save my work and then gaze at the new Configuration's majesty: Okay, enough prep work - let's get into some Actions!
Utility actions getConfigValue action I'll head into Library &gt; Actions to create a new action inside my com.virtuallypotato.utility module. This action's sole purpose will be to extract the details out of the configuration element I just created. Right now I'm only concerned with retrieving the one vRAHost configuration but I'm a fan of using generic pluggable modules where possible. This one will work to retrieve the value of any variable defined in any configuration element so I'll call it getConfigValue.
Input Type Description path string Path to Configuration folder configurationName string Name of Configuration variableName string Name of desired variable inside Configuration // torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: getConfigValue action Inputs: path (string), configurationName (string), variableName (string) Return type: string */ var configElement = null; for each (configElement in Server.getConfigurationElementCategoryWithPath(path).configurationElements) { if (configElement.name.indexOf(configurationName) === 0) { break; }; } var attribValue = configElement.getAttributeWithKey(variableName).value; return attribValue; vraLogin action Next, I'll create another action in my com.virtuallypotato.utility module which will use the getConfigValue action to retrieve the endpoint details. It will then submit a POST request to that endpoint to log in, and it will return the authentication token. Later actions will be able to call upon vraLogin to grab a token and then pass that back to the IaaS API in the request headers - but I'm getting ahead of myself. Let's get the login sorted:
// torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraLogin action Inputs: none Return type: string */ var restHost = System.getModule(&#34;com.virtuallypotato.utility&#34;).getConfigValue(&#34;vPotato&#34;, &#34;Endpoints&#34;, &#34;vRAHost&#34;); var host = restHost.host; var loginObj = { domain: restHost.domain, password: restHost.password, username: restHost.username }; var loginJson = JSON.stringify(loginObj); var request = host.createRequest(&#34;POST&#34;, &#34;/csp/gateway/am/api/login&#34;, loginJson); request.setHeader(&#34;Content-Type&#34;, &#34;application/json&#34;); var response = request.execute(); var token = JSON.parse(response.contentAsString).cspAuthToken; System.debug(&#34;Created vRA API session: &#34; + token); return token; vraLogout action I like to clean up after myself so I'm also going to create a vraLogout action in my com.virtuallypotato.utility module to tear down the API session when I'm finished with it.
Input Type Description token string Auth token of the session to destroy // torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraLogout action Inputs: token (string) Return type: string */ var host = System.getModule(&#34;com.virtuallypotato.utility&#34;).getConfigValue(&#34;vPotato&#34;, &#34;Endpoints&#34;, &#34;vRAHost&#34;).host; var logoutObj = { idToken: token }; var logoutJson = JSON.stringify(logoutObj); var request = host.createRequest(&#34;POST&#34;, &#34;/csp/gateway/am/api/auth/logout&#34;, logoutJson); request.setHeader(&#34;Content-Type&#34;, &#34;application/json&#34;); request.execute().statusCode; System.debug(&#34;Terminated vRA API session: &#34; + token); vraExecute action My final &quot;utility&quot; action for this effort will run in between vraLogin and vraLogout, and it will handle making the actual API call and returning the results. This way I won't have to implement the API handler in every single action which needs to talk to the API - they can just call my new action, vraExecute.
Input Type Description token string Auth token from vraLogin method string REST Method (GET, POST, etc.) uri string Path to API controller (/iaas/api/flavor-profiles) content string Any additional data to pass with the request // torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraExecute action Inputs: token (string), method (string), uri (string), content (string) Return type: string */ var host = System.getModule(&#34;com.virtuallypotato.utility&#34;).getConfigValue(&#34;vPotato&#34;, &#34;Endpoints&#34;, &#34;vRAHost&#34;).host; System.log(host); if (content) { var request = host.createRequest(method, uri, content); } else { var request = host.createRequest(method, uri); } request.setHeader(&#34;Content-Type&#34;, &#34;application/json&#34;); request.setHeader(&#34;Authorization&#34;, &#34;Bearer &#34; + token); var response = request.execute(); var statusCode = response.statusCode; var responseContent = response.contentAsString; if (statusCode &gt; 399) { System.error(responseContent); throw &#34;vraExecute action failed, status code: &#34; + statusCode; } return responseContent; Bonus: vraTester action That's it for the core utility actions - but wouldn't it be great to know that this stuff works before moving on to handling the request input? Enter vraTester! It will be handy to have an action I can test vRA REST requests in before going all-in on a solution.
This action will:
Call vraLogin to get an API token. Call vraExecute with that token, a REST method, and an API endpoint to retrieve some data. Call vraLogout to terminate the API session. Return the data so we can see if it worked. Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in.
Anyway, here's my first swing:
// torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraTester action Inputs: none Return type: string */ var token = System.getModule(&#39;com.virtuallypotato.utility&#39;).vraLogin(); var result = JSON.parse(System.getModule(&#39;com.virtuallypotato.utility&#39;).vraExecute(token, &#39;GET&#39;, &#39;/iaas/api/zones&#39;)).content; System.log(JSON.stringify(result)); System.getModule(&#39;com.virtuallypotato.utility&#39;).vraLogout(token); return JSON.stringify(result); Pretty simple, right? Let's see if it works: It did! Though that result is a bit hard to parse visually, so I'm going to prettify it a bit:
// torchlight! {&#34;lineNumbers&#34;: true} [ { &#34;tags&#34;: [], &#34;tagsToMatch&#34;: [ { &#34;key&#34;: &#34;compute&#34;, &#34;value&#34;: &#34;nuc&#34; } ], &#34;placementPolicy&#34;: &#34;DEFAULT&#34;, &#34;customProperties&#34;: { &#34;zone_overlapping_migrated&#34;: &#34;true&#34; }, &#34;folder&#34;: &#34;vRA_Deploy&#34;, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-1001&#34;, &#34;cloudAccountId&#34;: &#34;75d29635-f128-4b85-8cf9-95a9e5981c68&#34;, &#34;name&#34;: &#34;NUC&#34;, // [tl! focus] &#34;id&#34;: &#34;3d4f048a-385d-4759-8c04-117a170d060c&#34;, &#34;updatedAt&#34;: &#34;2022-06-02&#34;, &#34;organizationId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;orgId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;_links&#34;: { &#34;projects&#34;: { &#34;hrefs&#34;: [ &#34;/iaas/api/projects/9c3d1e73-1276-42e7-8d8e-cac40251a29e&#34; ] }, &#34;computes&#34;: { &#34;href&#34;: &#34;/iaas/api/zones/3d4f048a-385d-4759-8c04-117a170d060c/computes&#34; }, &#34;self&#34;: { &#34;href&#34;: &#34;/iaas/api/zones/3d4f048a-385d-4759-8c04-117a170d060c&#34; }, &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136&#34; // [tl! focus] }, &#34;cloud-account&#34;: { &#34;href&#34;: &#34;/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68&#34; } } }, { &#34;tags&#34;: [], &#34;tagsToMatch&#34;: [ { &#34;key&#34;: &#34;compute&#34;, &#34;value&#34;: &#34;qtz&#34; } ], &#34;placementPolicy&#34;: &#34;DEFAULT&#34;, &#34;customProperties&#34;: { &#34;zone_overlapping_migrated&#34;: &#34;true&#34; }, &#34;externalRegionId&#34;: &#34;Datacenter:datacenter-39056&#34;, &#34;cloudAccountId&#34;: &#34;75d29635-f128-4b85-8cf9-95a9e5981c68&#34;, &#34;name&#34;: &#34;QTZ&#34;, // [tl! focus] &#34;id&#34;: &#34;84470591-74a2-4659-87fd-e5d174a679a2&#34;, &#34;updatedAt&#34;: &#34;2022-06-02&#34;, &#34;organizationId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;orgId&#34;: &#34;61ebe5bf-5f55-4dee-8533-7ad05c067dd9&#34;, &#34;_links&#34;: { &#34;projects&#34;: { &#34;hrefs&#34;: [ &#34;/iaas/api/projects/9c3d1e73-1276-42e7-8d8e-cac40251a29e&#34; ] }, &#34;computes&#34;: { &#34;href&#34;: &#34;/iaas/api/zones/84470591-74a2-4659-87fd-e5d174a679a2/computes&#34; }, &#34;self&#34;: { &#34;href&#34;: &#34;/iaas/api/zones/84470591-74a2-4659-87fd-e5d174a679a2&#34; }, &#34;region&#34;: { &#34;href&#34;: &#34;/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f&#34; // [tl! focus] }, &#34;cloud-account&#34;: { &#34;href&#34;: &#34;/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68&#34; } } } ] I can see that it returned two Cloud Zones, named NUC (for my NUC 9 host) and QTZ (for my Quartz64 SBC running ESXi-ARM). Each Zone also includes data about other objects associated with the Zone, such as that _links.region.href property I mentioned earlier.
My compute targets live at the Zone level, each Zone lives inside a given Region, and the Image Profiles reside within a Region. See where this is going? I now have the information I need to link a Zone to the Image Profiles available within that Zone.
Input actions So now I'm ready to work on the actions that will handle passing information between the vRA REST API and the Cloud Assembly deployment request form. For organization purposes, I'll stick them in a new module which I'll call com.virtuallypotato.inputs. And for the immediate purposes, I'm going to focus on just two fields in the request form: Zone (for where the VM should be created) and Image (for what VM template it will be spawned from). The Zone dropdown will be automatically populated when the form loads, and Image options will show up as soon as the user has selected a Zone.
vraGetZones action This action will basically just repeat the call that I tested above in vraTester, but parse the result to extract just the Zone names. It will pop those into an array of strings which can be rendered as a dropdown on the Cloud Assembly side.
// torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraGetZones action Inputs: none Return type: Array/string */ var zoneNames = new Array(); var token = System.getModule(&#34;com.virtuallypotato.utility&#34;).vraLogin(); var zones = JSON.parse(System.getModule(&#34;com.virtuallypotato.utility&#34;).vraExecute(token, &#34;GET&#34;, &#34;/iaas/api/zones&#34;, null)).content; zones.forEach( function (zone) { zoneNames.push(zone.name); } ); zoneNames.sort(); System.getModule(&#34;com.virtuallypotato.utility&#34;).vraLogout(token); return zoneNames; vraGetImages action Once the user has selected a Zone from the dropdown, the vraGetImages action will first contact the same /iaas/api/zones API to get the same list of available Zones. It will look through that list to find the one with the matching name, and then extract the ._links.region.href URI for the Zone.
Next it will reach out to /iaas/api/images to retrieve all the available images. For each image, it will compare its associated ._links.region.href URI to that of the designated Zone; if there's a match, the action will add the image to an array of strings which will then be returned back to the request form.
Oh, and the whole thing is wrapped in a conditional so that the code only executes when zoneName has been set on the request form; otherwise it simply returns an empty string.
Input Type Description zoneName string The name of the Zone selected in the request form // torchlight! {&#34;lineNumbers&#34;: true} /* JavaScript: vraGetImages action Inputs: zoneName (string) Return type: array/string */ if (!(zoneName == &#34;&#34; || zoneName == null)) { var arrImages = new Array(); var regionUri = null; var token = System.getModule(&#34;com.virtuallypotato.utility&#34;).vraLogin(); var zones = JSON.parse(System.getModule(&#34;com.virtuallypotato.utility&#34;).vraExecute(token, &#34;GET&#34;, &#34;/iaas/api/zones&#34;, null)).content; System.debug(&#34;Zones: &#34; + JSON.stringify(zones)); for each (zone in zones) { if (zone.name === zoneName) { System.debug(&#34;Matching zone: &#34; + zone.name); regionUri = zone._links.region.href; } if (regionUri != null) { break; }; } System.debug(&#34;Matching region URI: &#34; + regionUri); var images = JSON.parse(System.getModule(&#34;com.virtuallypotato.utility&#34;).vraExecute(token, &#34;GET&#34;, &#34;/iaas/api/images&#34;, null)).content; System.debug(&#34;Images: &#34; + JSON.stringify(images)); images.forEach( function (image) { if (image._links.region.href === regionUri) { System.debug(&#34;Images in region: &#34; + JSON.stringify(image.mapping)); for (var i in image.mapping) { System.debug(&#34;Image: &#34; + i); arrImages.push(i); } } } ); arrImages.sort(); System.getModule(&#34;com.virtuallypotato.utility&#34;).vraLogout(token); return arrImages; } else { return [&#34;&#34;]; } I'll use the Debug button to test this action real quick-like, providing the NUC Zone as the input: It works! Well, at least when called directly. Let's see how it does when called from Cloud Assembly.
Cloud Assembly request For now I'm really only testing using my new vRO actions so my Cloud Template is going to be pretty basic. I'm not even going to add any resources to the template; I don't even need it to be deployable.
What I do need are two inputs. I'd normally just write the inputs directly as YAML, but the syntax for referencing vRO actions can be a bit tricky and I don't want to mess it up. So I pop over to the Inputs tab in the editor pane on the right and click the New Cloud Template Input button.
I'll start with an input called zoneName, which will be a string: I'll then click on More Options and scroll to the bottom to select that the field data should come from an External Source: I click the Select button and can then search for the vraGetZones action I wish to use: And then hit Create to add the new input to the template.
Next I'll repeat the same steps to create a new image input. This time, though, when I select the vraGetImages action I'll also need to select another input to bind to the zoneName parameter: The full code for my template now looks like this:
# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: zoneName: type: string title: Zone $dynamicEnum: /data/vro-actions/com.virtuallypotato.inputs/vraGetZones image: type: string title: Image $dynamicEnum: /data/vro-actions/com.virtuallypotato.inputs/vraGetImages?zoneName={{zoneName}} resources: {} And I can use the Test button at the bottom of the Cloud Assembly template editor to confirm that everything works:
It does!
Conclusion This has been a very quick introduction on how to start pulling data from the vRA APIs, but it (hopefully) helps to consolidate all the knowledge and information I had to find when I started down this path - and maybe it will give you some ideas on how you can use this ability within your own vRA environment.
In the near future, I'll also have a post on how to do the same sort of things with the vCenter REST API, and I hope to follow that up with a deeper dive on all the tricks I've used to make my request forms as dynamic as possible with the absolute minimum of hardcoded data in the templates. Let me know in the comments if there are any particular use cases you'd like me to explore further.
Until next time!
The online version is really intended for the vRealize Automation Cloud hosted solution. It can be a useful reference but some APIs are missing.
This request form is pure plaintext so you'd never have known that my password is actually ******** if I hadn't mentioned it. Whoops!
Well, most of it.
That knowledge will come in handy later.
(Post summary: "Using HTTPie and Swagger to learn about interacting with the VMware vRealize Automation REST API, and then leveraging vRealize Orchestrator actions with API queries to dynamically populate a vRA request form." https://runtimeterror.dev/getting-started-vra-rest-api/)

ESXi ARM Edition on the Quartz64 SBC
https://runtimeterror.dev/esxi-arm-on-quartz64/
Tags: vmware, linux, chromeos, homelab, tailscale, photon, vpn

ESXi-ARM Fling v1.10 Update
On July 20, 2022, VMware released a major update↗ for the ESXi-ARM Fling. Among other fixes and improvements↗, this version enables in-place ESXi upgrades and adds support for the Quartz64's on-board NIC↗. To update, I:
Wrote the new ISO installer to another USB drive. Attached the installer drive to the USB hub, next to the existing ESXi drive. Booted the installer and selected to upgrade ESXi on the existing device. Powered-off post-install, unplugged the hub, and attached the ESXi drive directly to the USB2 port on the Quartz64. Connected the ethernet cable to the onboard NIC. Booted to ESXi. Once booted, I used the DCUI to (re)configure the management network and activate the onboard network adapter. Now I've got directly-attached USB storage, and the onboard NIC provides gigabit connectivity. I've made a few tweaks to the rest of the article to reflect the lifting of those previous limitations.
Up until this point, my homelab has consisted of just a single Intel NUC9 ESXi host running a bunch of VMs. It's served me well but lately I've been thinking that it would be good to have an additional host for some of my workloads. In particular, I'd like to have a Tailscale node on my home network which isn't hosted on the NUC so that I can patch ESXi remotely without cutting off my access. I appreciate the small footprint of the NUC so I'm not really interested in a large "grown-up" server at this time. So for now I thought it might be fun to experiment with VMware's ESXi on ARM fling↗ which makes it possible to run a full-fledged VMware hypervisor on a Raspberry Pi.
Of course, I decided to embark upon this project at a time when Raspberry Pis are basically impossible to get. So instead I picked up a PINE64 Quartz64↗ single-board computer (SBC) which seems like a potentially very capable piece of hardware... but there is a prominent warning at the bottom of the store page↗:
Be Advised
"The Quartz64 SBC still in early development stage, only suitable for developers and advanced users wishing to contribute to early software development. Both mainline and Rockchip’s BSP fork of Linux have already been booted on the platform and development is proceeding quickly, but it will be months before end-users and industry partners can reliably deploy it. If you need a single board computer for a private or industrial application today, we encourage you to choose a different board from our existing lineup or wait a few months until Quartz64 software reaches a sufficient degree of maturity."
More specifically, for my use case there will be a number of limitations (at least for now - this SBC is still pretty new to the market so hopefully support will be improving further over time):
The onboard NIC is not supported by ESXi.1 Onboard storage (via eMMC, eSATA, or PCIe) is not supported. The onboard microSD slot is only used for loading firmware on boot, not for any other storage. Only two (of the four) USB ports are documented to work reliably. Of the remaining two ports, the lower USB3 port shouldn't be depended upon either↗ so I'm really just stuck with a single USB2 interface which will need to handle both networking and storage.1 2 All that is to say that (as usual) I'll be embarking upon this project in Hard Mode - and I'll make it extra challenging (as usual) by doing all of the work from a Chromebook. In any case, here's how I managed to get ESXi running on the Quartz64 SBC and then deploy a small workload.
Bill of Materials Let's start with the gear (hardware and software) I needed to make this work:
Hardware | Purpose
PINE64 Quartz64 Model-A 8GB Single Board Computer↗ | kind of the whole point
ROCKPro64 12V 5A US Power Supply↗ | provides power for the SBC
Serial Console “Woodpecker” Edition↗ | allows for serial console access
Google USB-C Adapter↗ | connects the console adapter to my Chromebook
Sandisk 64GB Micro SD Memory Card↗ | only holds the firmware; a much smaller size would be fine
Monoprice USB-C MicroSD Reader↗ | to write firmware to the SD card from my Chromebook
Samsung MUF-256AB/AM FIT Plus 256GB USB 3.1 Drive↗ | ESXi boot device and local VMFS datastore
3D-printed open enclosure for QUARTZ64↗ | protect the board a little bit while allowing for plenty of passive airflow
Downloads | Purpose
ESXi ARM Edition↗ (v1.10) | hypervisor
Tianocore EDK II firmware for Quartz64↗ (2022-07-20) | firmware image
Chromebook Recovery Utility↗ | easy way to write filesystem images to external media
Beagle Term↗ | for accessing the Quartz64 serial console
Preparation Firmware media The very first task is to write the required firmware image (download here↗) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a much smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the Chromebook Recovery Utility (CRU)↗ for writing the images to external storage as described in another post.
After downloading QUARTZ64_EFI.img.gz↗, I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the .img file into a standard .zip:
gunzip QUARTZ64_EFI.img.gz # [tl! .cmd:1] zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img I can then write it to the micro SD card by opening CRU, clicking on the gear icon, and selecting the Use local image option.
ESXi installation media I'll also need to prepare the ESXi installation media (download here↗). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new ESX-OSData partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you can install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)
In any case, to make the downloaded VMware-VMvisor-Installer-7.0-20133114.aarch64.iso writeable with CRU all I need to do is add .bin to the end of the filename:
mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin} # [tl! .cmd] Then it's time to write the image onto the USB drive: Console connection I'll need to use the Quartz64 serial console interface and &quot;Woodpecker&quot; edition console USB adapter↗ to interact with the board until I get ESXi installed and can connect to it with the web interface or SSH. The adapter comes with a short breakout cable, and I connect it thusly:
Quartz64 GPIO pin Console adapter pin Wire color 6 GND Brown 8 RXD Red 10 TXD Orange I leave the yellow wire dangling free on both ends since I don't need a +V connection for the console to work. To verify that I've got things working, I go ahead and pop the micro SD card containing the firmware into its slot on the bottom side of the Quartz64 board, connect the USB console adapter to my Chromebook, and open the Beagle Term↗ app to set up the serial connection.
I'll need to use these settings for the connection (which are the defaults selected by Beagle Term):
Setting Value Port /dev/ttyUSB0 Bitrate 115200 Data Bit 8 bit Parity none Stop Bit 1 Flow Control none I hit Connect and then connect the Quartz64's power supply. I watch as it loads the firmware and then launches the BIOS menu: Host creation ESXi install Now that I've got everything in order I can start the install. A lot of experimentation on my part confirmed the sad news about the USB ports: of the four USB ports, only the top-right USB2 port works reliably for me. So I connect my 256GB USB drive to that port.1 This isn't ideal from a performance aspect, of course, but slow storage is more useful than no storage.
On that note, remember what I mentioned earlier about how the ESXi installer would want to fill up ~128GB worth of whatever drive it targets? The ESXi ARM instructions say that you can get around that by passing the autoPartitionOSDataSize advanced option to the installer by pressing [Shift] + O in the ESXi bootloader, but the Quartz64-specific instructions say that you can't do that with this board since only the serial console is available... It turns out this is a (happy) lie.
I hooked up a monitor to the board's HDMI port and a USB keyboard to a free port on the hub and verified that the keyboard let me maneuver through the BIOS menu. From here, I hit the Reset button on the Quartz64 to restart it and let it boot from the connected USB drive. When I got to the ESXi pre-boot countdown screen, I pressed [Shift] + O as instructed and added autoPartitionOSDataSize=8192 to the boot options. This limits the size of the new-for-ESXi7 ESX-OSData VMFS-L volume to 8GB and will give me much more space for the local datastore.
Beyond that it's a fairly typical ESXi install process: Initial configuration After the installation completed, I rebooted the host and watched for the Direct Console User Interface (DCUI) to come up: I hit [F2] and logged in with the root credentials to get to the System Customization menu: The host automatically received an IP issued by DHCP but I'd like for it to instead use a static IP. I'll also go ahead and configure the appropriate DNS settings. I also create the appropriate matching A and PTR records in my local DNS, and (after bouncing the management network) I can access the ESXi Embedded Host Client at https://quartzhost.lab.bowdre.net: That's looking pretty good... but what's up with that date and time? Time has kind of lost all meaning in the last couple of years but I'm reasonably certain that January 1, 2001 was at least a few years ago. And I know from past experience that incorrect host time will prevent it from being successfully imported to a vCenter inventory.
Let's clear that up by enabling the Network Time Protocol (NTP) service on this host. I'll do that by going to Manage &gt; System &gt; Time &amp; Date and clicking the Edit NTP Settings button. I don't run a local NTP server so I'll point it at pool.ntp.org and set the service to start and stop with the host: Now I hop over to the Services tab, select the ntpd service, and then click the Start button there. Once it's running, I then restart ntpd to help encourage the system to update the time immediately. Once the service is started I can go back to Manage &gt; System &gt; Time &amp; Date, click the Refresh button, and confirm that the host has been updated with the correct time: With the time sorted, I'm just about ready to join this host to my vCenter, but first I'd like to take a look at the storage situation - after all, I did jump through those hoops with the installer to make sure that I would wind up with a useful local datastore. Upon going to Storage &gt; More storage &gt; Devices and clicking on the single listed storage device, I can see in the Partition Diagram that the ESX-OSData VMFS-L volume was indeed limited to 8GB, and the free space beyond that was automatically formatted as a VMFS datastore: And I can also take a peek at that local datastore: With 200+ gigabytes of free space on the datastore I should have ample room for a few lightweight VMs.
Adding to vCenter Alright, let's go ahead and bring the new host into my vCenter environment. That starts off just like any other host, by right-clicking an inventory location in the Hosts &amp; Clusters view and selecting Add Host. Success! I've now got a single-board hypervisor connected to my vCenter. Now let's give that host a workload.3
Workload creation As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new host so that I can access my home network from outside of the single-host virtual lab environment. I've become a fan of using VMware's Photon OS↗ so I'll get a VM deployed and then install the Tailscale agent.
Deploying Photon OS VMware provides Photon in a few different formats, as described on the download page↗. I'm going to use the &quot;OVA with virtual hardware v13 arm64&quot; version so I'll kick off that download of photon_uefi.ova. I'm actually going to download that file straight to my deb01 Linux VM:
wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova # [tl! .cmd] and then spawn a quick Python web server to share it out:
python3 -m http.server # [tl! .cmd] Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... # [tl! .nocopy] That will let me deploy from a resource already inside my lab network instead of transferring the OVA from my laptop. So now I can go back to my vSphere Client and go through the steps to Deploy OVF Template to the new host, and I'll plug in the URL http://deb01.lab.bowdre.net:8000/photon_uefi.ova: I'll name it pho01 and drop it in an appropriate VM folder: And place it on the new Quartz64 host: The rest of the OVF deployment is basically just selecting the default options and clicking through to finish it. And then once it's deployed, I'll go ahead and power on the new VM. Configuring Photon There are just a few things I'll want to configure on this VM before I move on to installing Tailscale, and I'll start out simply by logging in with the remote console.
Default credentials
The default password for Photon's root user is changeme. You'll be forced to change that at first login.
Now that I'm in, I'll set the hostname appropriately:
hostnamectl set-hostname pho01 # [tl! .cmd_root] For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file:
cat > /etc/systemd/network/10-static-en.network << "EOF" # [tl! .cmd_root] [Match] Name = eth0 [Network] Address = 192.168.1.17/24 Gateway = 192.168.1.1 DNS = 192.168.1.5 DHCP = no IPForward = yes EOF chmod 644 /etc/systemd/network/10-static-en.network # [tl! .cmd_root:1] systemctl restart systemd-networkd I'm including IPForward = yes to enable IP forwarding↗ for Tailscale.
With networking sorted, it's probably a good idea to check for and apply any available updates:
tdnf update -y # [tl! .cmd_root] I'll also go ahead and create a normal user account (with sudo privileges) for me to use:
useradd -G wheel -m john # [tl! .cmd_root:1] passwd john Now I can use SSH to connect to the VM and ditch the web console:
ssh pho01.lab.bowdre.net # [tl! .cmd] Password: # [tl! .nocopy] sudo whoami # [tl! .cmd] # [tl! .nocopy:start] We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: #1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility. [sudo] password for john root # [tl! .nocopy:end] Looking good! I'll now move on to the justification4 for this entire exercise:
Installing Tailscale If I weren't doing this on hard mode, I could use Tailscale's install script↗ like I do on every other Linux system. Hard mode is what I do though, and the installer doesn't directly support Photon OS. I'll instead consult the manual install instructions↗ which tell me to download the appropriate binaries from https://pkgs.tailscale.com/stable/#static↗. So I'll grab the link for the latest arm64 build and pull it down to the VM:
curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz # [tl! .cmd] Then I can unpack it:
sudo tdnf install tar # [tl! .cmd:2] tar xvf tailscale_arm64.tgz cd tailscale_1.22.2_arm64/ So I've got the tailscale and tailscaled binaries as well as some sample service configs in the systemd directory:
ls # [tl! .cmd] total 32288 # [tl! .nocopy:4] drwxr-x--- 2 john users 4096 Mar 18 02:44 systemd -rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale -rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled ls ./systemd # [tl! .cmd] total 8 # [tl! .nocopy:2] -rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults -rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service Dealing with the binaries is straight-forward. I'll drop them into /usr/bin/ and /usr/sbin/ (respectively) and set the file permissions:
sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:1] sudo install -m 755 tailscaled /usr/sbin/ Then I'll descend to the systemd folder and see what's up:
cd systemd/ # [tl! .cmd:1] cat tailscaled.defaults # Set the port to listen on for incoming VPN packets. [tl! .nocopy:8] # Remote nodes will automatically be informed about the new port number, # but you might want to configure this in order to set external firewall # settings. PORT="41641" # Extra flags you might want to pass to tailscaled. FLAGS="" cat tailscaled.service # [tl! .cmd] [Unit] # [tl! .nocopy:start] Description=Tailscale node agent Documentation=https://tailscale.com/kb/ Wants=network-pre.target After=network-pre.target NetworkManager.service systemd-resolved.service [Service] EnvironmentFile=/etc/default/tailscaled ExecStartPre=/usr/sbin/tailscaled --cleanup ExecStart=/usr/sbin/tailscaled --state=/var/lib/tailscale/tailscaled.state --socket=/run/tailscale/tailscaled.sock --port $PORT $FLAGS ExecStopPost=/usr/sbin/tailscaled --cleanup Restart=on-failure RuntimeDirectory=tailscale RuntimeDirectoryMode=0755 StateDirectory=tailscale StateDirectoryMode=0700 CacheDirectory=tailscale CacheDirectoryMode=0750 Type=notify [Install] WantedBy=multi-user.target # [tl! .nocopy:end] tailscaled.defaults contains the default configuration that will be referenced by the service, and tailscaled.service tells me that it expects to find it at /etc/default/tailscaled. So I'll copy it there and set the perms:
sudo install -m 644 tailscaled.defaults /etc/default/tailscaled # [tl! .cmd] tailscaled.service will get dropped in /usr/lib/systemd/system/:
sudo install -m 644 tailscaled.service /usr/lib/systemd/system/ # [tl! .cmd] Then I'll enable the service and start it:
sudo systemctl enable tailscaled.service # [tl! .cmd:1] sudo systemctl start tailscaled.service And finally log in to Tailscale, including my tag:home tag for ACL purposes and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well:
sudo tailscale up --advertise-tags "tag:home" --advertise-routes "192.168.1.0/24" # [tl! .cmd] That will return a URL I can use to authenticate, and I'll then be able to view and manage the new Tailscale node from the login.tailscale.com admin portal: You might remember from last time that the "Subnets (!)" label indicates that this node is attempting to advertise a subnet route but that route hasn't yet been accepted through the admin portal. You may also remember that the 192.168.1.0/24 subnet is already being advertised by my vyos node:5 Things could potentially get messy if I have two nodes advertising routes for the same subnet6 so I'm going to use the admin portal to disable that route on vyos before enabling it for pho01. I'll let vyos continue to route the 172.16.0.0/16 subnet (which only exists inside the NUC's vSphere environment after all) and it can continue to function as an Exit Node as well. Now I can remotely access the VM (and thus my homelab!) from any of my other Tailscale-enrolled devices!
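For good measure, a couple of quick checks on the Photon VM will confirm that the node is up and holding its addresses (these are standard tailscale CLI subcommands, shown here just as a sanity check):
sudo tailscale status   # lists this node and its peers
sudo tailscale ip -4    # shows the Tailscale IPv4 address assigned to pho01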
Conclusion I actually received the Quartz64 waay back on March 2nd, and it's taken me until this week to get all the pieces in place and working the way I wanted.
As is so often the case, a lot of time and effort would have been saved if I had RTFM'd7 before diving in to the deep end. I definitely hadn't anticipated all the limitations that would come with the Quartz64 SBC before ordering mine. Now that it's done, though, I'm pretty pleased with the setup, and I feel like I learned quite a bit along the way. I keep reminding myself that this is still a very new hardware platform. I'm excited to see how things improve with future development efforts.
Fixed in the v1.10 release.
Jared McNeill, the maintainer of the firmware image I'm using, just pushed a commit↗ which sounds like it may address this flaky USB3 issue, but that was after I had gotten everything else working as described below. I'll check that out once a new release gets published.
Hosts love workloads.
Entirely arbitrary and fabricated justification.
The Tailscale add-on for Home Assistant↗ also tries to advertise its subnets by default, but I leave that disabled in the admin portal as well.
Tailscale does offer a subnet router failover feature↗, but it is only available starting on the Business ($15/month) plan↗ and not the $48/year Personal Pro plan that I'm using.
Read The Friendly Manual. Yeah.
(Post summary: "Getting started with the experimental ESXi Arm Edition fling to run a VMware hypervisor on the PINE64 Quartz64 single-board computer, and installing a Tailscale node on Photon OS to facilitate improved remote access to my home network." https://runtimeterror.dev/esxi-arm-on-quartz64/)

Download Web Folder Contents with Powershell (`wget -r` replacement)
https://runtimeterror.dev/powershell-download-web-folder-contents/
Tags: powershell, windows

We've been working lately to use HashiCorp Packer↗ to standardize and automate our VM template builds, and we found a need to pull in all of the contents of a specific directory on an internal web server. This would be pretty simple for Linux systems using wget -r, but we needed to find another solution for our Windows builds.
A coworker and I cobbled together a quick PowerShell solution which will download the files within a specified web URL to a designated directory (without recreating the nested folder structure):
# torchlight! {"lineNumbers": true} $outputdir = 'C:\Scripts\Download\' $url = 'https://win01.lab.bowdre.net/stuff/files/' # enable TLS 1.2 and TLS 1.1 protocols [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12, [Net.SecurityProtocolType]::Tls11 $WebResponse = Invoke-WebRequest -Uri $url # get the list of links, skip the first one ("[To Parent Directory]") and download the files $WebResponse.Links | Select-Object -ExpandProperty href -Skip 1 | ForEach-Object { $fileName = $_.ToString().Split('/')[-1] # 'filename.ext' $filePath = Join-Path -Path $outputdir -ChildPath $fileName # 'C:\Scripts\Download\filename.ext' $baseUrl = $url.split('/') # ['https', '', 'win01.lab.bowdre.net', 'stuff', 'files'] $baseUrl = $baseUrl[0,2] -join '//' # 'https://win01.lab.bowdre.net' $fileUrl = '{0}{1}' -f $baseUrl.TrimEnd('/'), $_ # 'https://win01.lab.bowdre.net/stuff/files/filename.ext' Invoke-WebRequest -Uri $fileUrl -OutFile $filePath } The latest version of this script will be found on GitHub↗.
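For comparison's sake, the Linux-side equivalent mentioned at the top might look something like the following; the flag values are illustrative, and --cut-dirs needs to match the depth of the source path so the nested folders don't get recreated locally:
# mirror just the files in that directory without rebuilding the folder tree
wget -r -np -nH --cut-dirs=2 -R "index.html*" -P /tmp/download/ https://win01.lab.bowdre.net/stuff/files/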
(Post summary: "Using PowerShell to retrieve the files stored in a web directory when `wget` isn't an option." https://runtimeterror.dev/powershell-download-web-folder-contents/)

Active Directory authentication in Tanzu Community Edition
https://runtimeterror.dev/ldaps-authentication-tanzu-community-edition/
Tags: vmware, kubernetes, tanzu, activedirectory, certs, cluster, containers

Not long ago, I deployed a Tanzu Community Edition Kubernetes cluster in my homelab, and then I fumbled through figuring out how to log into it from a different device than the one I'd used for deploying the cluster from the tanzu cli. That setup works great for playing with Kubernetes in my homelab but I'd love to do some Kubernetes with my team at work and I really need the ability to authenticate multiple users with domain credentials for that.
The TCE team has created a rather detailed guide↗ for using the Pinniped↗ and Dex↗ packages1 to provide LDAPS authentication when TCE is connected with the NSX Advanced Load Balancer. This guide got me most of the way toward a working solution but I had to make some tweaks along the way (particularly since I'm not using NSX-ALB). I took notes as I worked through it, though, so here I'll share what it actually took to make this work in my environment.
Prerequisite In order to put the "Secure" in LDAPS, I need to make sure my Active Directory domain controller is configured for that, and that means also creating a Certificate Authority for issuing certificates. I followed the steps here↗ to get this set up in my homelab. I can then point my browser to http://win01.lab.bowdre.net/certsrv/certcarc.asp to download the base64-encoded CA certificate since I'll need that later. With that sorted, I'm ready to move on to creating a new TCE cluster with an LDAPS identity provider configured.
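As an extra sanity check on the LDAPS side (not something the guide calls for, just a habit of mine), openssl can confirm that the domain controller is actually answering on port 636 and show the certificate it presents:
# display the subject, issuer, and validity dates of the cert served on the LDAPS port
openssl s_client -connect win01.lab.bowdre.net:636 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates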
Cluster creation The cluster deployment steps are very similar to what I did last time so I won't repeat all those instructions here. The only difference is that this time I don't skip past the Identity Management screen; instead, I'll select the LDAPS radio button and get ready to fill out the form.
Identity management configuration LDAPS Identity Management Source
Field Value Notes LDAPS Endpoint win01.lab.bowdre.net:636 LDAPS interface of my AD DC BIND DN CN=LDAP Bind,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net DN of an account with LDAP read permissions BIND Password ******* Password for that account User Search Attributes
Field Value Notes Base DN OU=LAB,DC=lab,DC=bowdre,DC=net DN for the top-level OU containing my users Filter (objectClass=person) Username sAMAccountName I want to auth as john rather than [email protected] (userPrincipalName) Group Search Attributes
Field Value Notes Base DN OU=LAB,DC=lab,DC=bowdre,DC=net DN for OU containing my users Filter (objectClass=group) Name Attribute cn Common Name User Attribute DN Distinguished Name (capitalization matters!) Group Attribute member:1.2.840.113556.1.4.1941: Used to enumerate which groups a user is a member of2 And I'll copy the contents of the base64-encoded CA certificate I downloaded earlier and paste them into the Root CA Certificate field.
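If the OpenLDAP client utilities happen to be installed somewhere handy, that nested-group matching rule can also be exercised directly against the domain controller before trusting it to the wizard. A rough example using the bind DN, base DN, and my user's DN from the values above (LDAPTLS_CACERT can be pointed at the downloaded CA certificate if it isn't already trusted):
ldapsearch -LLL -H ldaps://win01.lab.bowdre.net:636 \
  -D "CN=LDAP Bind,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net" -W \
  -b "OU=LAB,DC=lab,DC=bowdre,DC=net" \
  "(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net))" cn
That should return every group my account belongs to, whether directly or through nesting.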
Before moving on, I can use the Verify LDAP Configuration button to quickly confirm that the connection is set up correctly. (I discovered that it doesn't honor the attribute selections made on the previous screen so I have to search for my Common Name (John Bowdre) instead of my username (john), but this at least lets me verify that the connection and certificate are working correctly.) I can then click through the rest of the wizard but (as before) I'll stop on the final review page. Despite entering everything correctly in the wizard I'll actually need to make a small edit to the deployment configuration YAML so I make a note of its location and copy it to a file called tce-mgmt-deploy.yaml in my working directory so that I can take a look. Editing the cluster spec Remember that awkward member:1.2.840.113556.1.4.1941: attribute from earlier? Here's how it looks within the TCE cluster-defining YAML:
# torchlight! {&#34;lineNumbers&#34;: true} LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net LDAP_GROUP_SEARCH_FILTER: (objectClass=group) LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: &#39;member:1.2.840.113556.1.4.1941:&#39; LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN That : at the end of the line will cause problems down the road - specifically when the deployment process creates the dex app which handles the actual LDAPS authentication piece. Cumulative hours of troubleshooting (and learning!) eventually revealed to me that something along the way had choked on that trailing colon and inserted this into the dex configuration:
# torchlight! {&#34;lineNumbers&#34;: true} userMatchers: - userAttr: DN groupAttr: member:1.2.840.113556.1.4.1941: null # [tl! focus] It should look like this instead:
# torchlight! {&#34;lineNumbers&#34;: true} userMatchers: - userAttr: DN groupAttr: &#39;member:1.2.840.113556.1.4.1941:&#39; # [tl! focus] That error prevents dex from starting correctly so the authentication would never work. I eventually figured out that using the | character to define the attribute as a literal scalar↗ would help to get around this issue so I changed the cluster YAML to look like this:
# torchlight! {&#34;lineNumbers&#34;: true} LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net LDAP_GROUP_SEARCH_FILTER: (objectClass=group) LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: | # [tl! focus:1] &#39;member:1.2.840.113556.1.4.1941:&#39; LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN Deploying the cluster That's the only thing I need to manually edit so now I can go ahead and create the cluster with:
tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml # [tl! .cmd] This will probably take 10-15 minutes to deploy so it's a great time to go top off my coffee.
And we're back - and with a friendly success message in the console:
You can now access the management cluster tce-mgmt by running &#39;kubectl config use-context tce-mgmt-admin@tce-mgmt&#39; Management cluster created! You can now create your first workload cluster by running the following: tanzu cluster create [name] -f [file] Some addons might be getting installed! Check their status by running the following: kubectl get apps -A I obediently follow the instructions to switch to the correct context and verify that the addons are all running:
kubectl config use-context tce-mgmt-admin@tce-mgmt # [tl! .cmd] Switched to context &#34;tce-mgmt-admin@tce-mgmt&#34;. # [tl! .nocopy:1] kubectl get apps -A # [tl! .cmd] NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE # [tl! .nocopy:start] tkg-system antrea Reconcile succeeded 5m2s 11m tkg-system metrics-server Reconcile succeeded 39s 11m tkg-system pinniped Reconcile succeeded 4m55s 11m tkg-system secretgen-controller Reconcile succeeded 65s 11m tkg-system tanzu-addons-manager Reconcile succeeded 70s 11m tkg-system vsphere-cpi Reconcile succeeded 32s 11m tkg-system vsphere-csi Reconcile succeeded 66s 11m # [tl! .nocopy:end] Post-deployment tasks I've got a TCE cluster now but it's not quite ready for me to authenticate with my AD credentials just yet.
Load Balancer deployment The guide I'm following from the TCE site↗ assumes that I'm using NSX-ALB in my environment, but I'm not. So, as before, I'll need to deploy Scott Rosenberg's kube-vip Carvel package↗:
git clone https://github.com/vrabbi/tkgm-customizations.git # [tl! .cmd:3] cd tkgm-customizations/carvel-packages/kube-vip-package kubectl apply -n tanzu-package-repo-global -f metadata.yml kubectl apply -n tanzu-package-repo-global -f package.yaml cat << EOF > values.yaml # [tl! .cmd] vip_range: 192.168.1.64-192.168.1.70 EOF tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml # [tl! .cmd] Modifying services to use the Load Balancer With the load balancer in place, I can follow the TCE instruction to modify the Pinniped and Dex services to switch from the NodePort type to the LoadBalancer type so they can be easily accessed from outside of the cluster. This process starts by creating a file called pinniped-supervisor-svc-overlay.yaml and pasting in the following overlay manifest:
# torchlight! {&#34;lineNumbers&#34;: true} #@ load(&#34;@ytt:overlay&#34;, &#34;overlay&#34;) #@overlay/match by=overlay.subset({&#34;kind&#34;: &#34;Service&#34;, &#34;metadata&#34;: {&#34;name&#34;: &#34;pinniped-supervisor&#34;, &#34;namespace&#34;: &#34;pinniped-supervisor&#34;}}) --- #@overlay/replace spec: type: LoadBalancer selector: app: pinniped-supervisor ports: - name: https protocol: TCP port: 443 targetPort: 8443 #@ load(&#34;@ytt:overlay&#34;, &#34;overlay&#34;) #@overlay/match by=overlay.subset({&#34;kind&#34;: &#34;Service&#34;, &#34;metadata&#34;: {&#34;name&#34;: &#34;dexsvc&#34;, &#34;namespace&#34;: &#34;tanzu-system-auth&#34;}}), missing_ok=True --- #@overlay/replace spec: type: LoadBalancer selector: app: dex ports: - name: dex protocol: TCP port: 443 targetPort: https This overlay will need to be inserted into the pinniped-addon secret which means that the contents need to be converted to a base64-encoded string:
base64 -w 0 pinniped-supervisor-svc-overlay.yaml # [tl! .cmd] I0AgbG9hZCgi[...]== # [tl! .nocopy] Avoid newlines
The -w 0 / --wrap=0 argument tells base64 to not wrap the encoded lines after a certain number of characters. If you leave this off, the string will get a newline inserted every 76 characters, and those linebreaks would make the string a bit more tricky to work with. Avoid having to clean up the output afterwards by being more specific with the request up front!
I'll copy the resulting base64 string (which is much longer than the truncated form I'm using here), and paste it into the following command to patch the secret (which will be named after the management cluster name so replace the tce-mgmt part as appropriate):
kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}' # [tl! .cmd] secret/tce-mgmt-pinniped-addon patched # [tl! .nocopy]
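To double-check that the patch landed (and that the base64 string decodes back to the intended overlay), the secret can be read back and decoded; note that the dot in the overlays.yaml key has to be escaped in the jsonpath expression:
kubectl -n tkg-system get secret tce-mgmt-pinniped-addon -o jsonpath='{.data.overlays\.yaml}' | base64 -d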
kubectl get svc -A -w # [tl! .cmd] NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) # [tl! .nocopy:start] pinniped-supervisor pinniped-supervisor NodePort 100.65.185.82 &lt;none&gt; 443:31234/TCP tanzu-system-auth dexsvc NodePort 100.70.238.106 &lt;none&gt; 5556:30167/TCP tkg-system packaging-api ClusterIP 100.65.185.94 &lt;none&gt; 443/TCP tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 &lt;pending&gt; 443:30167/TCP pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 &lt;pending&gt; 443:31234/TCP pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 192.168.1.70 443:31234/TCP tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 192.168.1.64 443:30167/TCP # [tl! .nocopy:end] I'll also need to restart the pinniped-post-deploy-job job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. I'll just watch for the new job to be created:
kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job # [tl! .cmd] job.batch &#34;pinniped-post-deploy-job&#34; deleted # [tl! .nocopy] kubectl get jobs -A -w # [tl! cmd] NAMESPACE NAME COMPLETIONS DURATION AGE # [tl! .nocopy:4] pinniped-supervisor pinniped-post-deploy-job 0/1 0s pinniped-supervisor pinniped-post-deploy-job 0/1 0s pinniped-supervisor pinniped-post-deploy-job 0/1 0s 0s pinniped-supervisor pinniped-post-deploy-job 1/1 9s 9s Authenticating Right now, I've got all the necessary components to support LDAPS authentication with my TCE management cluster but I haven't done anything yet to actually define who should have what level of access. To do that, I'll create a ClusterRoleBinding.
I'll toss this into a file I'll call tanzu-admins-crb.yaml:
# torchlight! {&#34;lineNumbers&#34;: true} kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: tanzu-admins subjects: - kind: Group name: Tanzu-Admins apiGroup: roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io I have a group in Active Directory called Tanzu-Admins which contains a group called vRA-Admins, and that group contains my user account (john). It's a roundabout way of granting access for a single user in this case but it should help to confirm that nested group memberships are being enumerated properly.
Once applied, users within that group will be granted the cluster-admin role3.
Let's do it:
kubectl apply -f tanzu-admins-crb.yaml # [tl! .cmd] clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created # [tl! .nocopy] Thus far, I've been using the default administrator context to interact with the cluster. Now it's time to switch to the non-admin context:
tanzu management-cluster kubeconfig get # [tl! .cmd] You can now access the cluster by running &#39;kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt&#39; # [tl! .nocopy:1] kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt # [tl! .cmd] Switched to context &#34;tanzu-cli-tce-mgmt@tce-mgmt&#34;. # [tl! .nocopy] After assuming the non-admin context, the next time I try to interact with the cluster it should kick off the LDAPS authentication process. It won't look like anything is happening in the terminal:
kubectl get nodes # [tl! .cmd] # [tl! .nocopy] But it will shortly spawn a browser page prompting me to log in: Doing so successfully will yield: And the kubectl command will return the expected details:
kubectl get nodes # [tl! .cmd] NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2] tce-mgmt-control-plane-v8l8r Ready control-plane,master 29h v1.21.5+vmware.1 tce-mgmt-md-0-847db9ddc-5bwjs Ready &lt;none&gt; 28h v1.21.5+vmware.1 So I've now successfully logged in to the management cluster as a non-admin user with my Active Directory credentials. Excellent!
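For a quick sanity check that the ClusterRoleBinding is granting what I think it is, kubectl's built-in access review works nicely (an optional check, not part of the original walkthrough):
kubectl auth can-i '*' '*' --all-namespaces   # should answer "yes" for a cluster-admin
kubectl auth can-i create deployments -n default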
Sharing access To allow other users to log in this way, I'd need to give them a copy of the non-admin kubeconfig, which I can get by running tanzu management-cluster config get --export-file tce-mgmt-config to export it into a file named tce-mgmt-config. They could use whatever method they like↗ to merge this in with their existing kubeconfig.
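One common way to handle that merge (sketched here with hypothetical file paths) is to let kubectl flatten the two configs together and then swap the merged file into place:
# merge the exported config with an existing one and write the result to a temporary file
KUBECONFIG=~/.kube/config:./tce-mgmt-config kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config
kubectl config get-contexts   # confirm the tanzu-cli-tce-mgmt@tce-mgmt context now shows up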
Tanzu CLI required
Other users hoping to work with a Tanzu Community Edition cluster will also need to install the Tanzu CLI tool in order to authenticate in this way. Installation instructions can be found here↗.
Deploying a workload cluster At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well here↗. As before, I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the CLUSTER_NAME and VSPHERE_CONTROL_PLANE_ENDPOINT values accordingly. This time I also deleted all of the LDAP_* and OIDC_* lines, but made sure to preserve the IDENTITY_MANAGEMENT_TYPE: ldap one.
I was then able to deploy the workload cluster with:
tanzu cluster create --file tce-work-deploy.yaml # [tl! .cmd] Validating configuration... # [tl! .nocopy:start] Creating workload cluster &#39;tce-work&#39;... Waiting for cluster to be initialized... cluster control plane is still being initialized: WaitingForControlPlane cluster control plane is still being initialized: ScalingUp Waiting for cluster nodes to be available... Waiting for addons installation... Waiting for packages to be up and running... Workload cluster &#39;tce-work&#39; created # [tl! .nocopy:end] Access the admin context:
tanzu cluster kubeconfig get --admin tce-work # [tl! .cmd] Credentials of cluster &#39;tce-work&#39; have been saved # [tl! .nocopy:2] You can now access the cluster by running &#39;kubectl config use-context tce-work-admin@tce-work&#39; kubectl config use-context tce-work-admin@tce-work # [tl! .cmd] Switched to context &#34;tce-work-admin@tce-work&#34;. # [tl! .nocopy] Apply the same ClusterRoleBinding from before4:
kubectl apply -f tanzu-admins-crb.yaml # [tl! .cmd] clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created # [tl! .nocopy] And finally switch to the non-admin context and log in with my AD account:
tanzu cluster kubeconfig get tce-work # [tl! .cmd] ℹ You can now access the cluster by running &#39;kubectl config use-context tanzu-cli-tce-work@tce-work&#39; # [tl! .nocopy:1] kubectl config use-context tanzu-cli-tce-work@tce-work # [tl! .cmd] Switched to context &#34;tanzu-cli-tce-work@tce-work&#34;. # [tl! .nocopy:1] kubectl get nodes # [tl! .cmd] NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2] tce-work-control-plane-zts6r Ready control-plane,master 12m v1.21.5+vmware.1 tce-work-md-0-bcfdc4d79-vn9xb Ready &lt;none&gt; 11m v1.21.5+vmware.1 Now I can Do Work! Create DHCP reservations for control plane nodes
VMware points out↗ that it's important to create DHCP reservations for the IP addresses which were dynamically assigned to the control plane nodes in both the management and workload clusters so be sure to take care of that before getting too involved in &quot;Work&quot;.
Troubleshooting notes It took me quite a bit of trial and error to get this far and (being a k8s novice) even more time figuring out how to even troubleshoot the problems I was encountering. So here are a few tips that helped me out.
Checking and modifying dex configuration I had a lot of trouble figuring out how to correctly format the member:1.2.840.113556.1.4.1941: attribute in the LDAPS config so that it wouldn't get split into multiple attributes due to the trailing colon - and it took me forever to discover that was even the issue. What eventually did the trick for me was learning that I could look at (and modify!) the configuration for the dex app with:
kubectl -n tanzu-system-auth edit configmaps dex # [tl! .cmd] [...] # [tl! .nocopy:start] groupSearch: baseDN: OU=LAB,DC=lab,DC=bowdre,DC=net filter: (objectClass=group) nameAttr: cn scope: sub userMatchers: - userAttr: DN groupAttr: &#39;member:1.2.840.113556.1.4.1941:&#39; host: win01.lab.bowdre.net:636 [...] # [tl! .nocopy:end] This let me make changes on the fly until I got a working configuration and then work backwards from there to format the initial input correctly.
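One caveat: dex only appears to read that configuration at startup, so after editing the configmap the pod likely needs to be bounced before the change takes effect. Assuming dex runs as a Deployment named dex (which the generated pod name suggests), that looks like:
# restart dex so it picks up the edited configmap
kubectl -n tanzu-system-auth rollout restart deployment dex
kubectl -n tanzu-system-auth rollout status deployment dex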
Reviewing dex logs Authentication attempts (at least on the LDAPS side of things) will show up in the logs for the dex pod running in the tanzu-system-auth namespace. This is a great place to look to see if the user isn't being found, credentials are invalid, or the groups aren't being enumerated correctly:
kubectl -n tanzu-system-auth get pods # [tl! .cmd] NAME READY STATUS RESTARTS AGE # [tl! .nocopy:2] dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl # [tl! .cmd] # no such user # [tl! .nocopy:start] {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\\u0026(objectClass=person)(sAMAccountName=johnny))&#34;,&#34;time&#34;:&#34;2022-03-06T22:29:57Z&#34;} {&#34;level&#34;:&#34;error&#34;,&#34;msg&#34;:&#34;ldap: no results returned for filter: \\&#34;(\\u0026(objectClass=person)(sAMAccountName=johnny))\\&#34;&#34;,&#34;time&#34;:&#34;2022-03-06T22:29:57Z&#34;} #invalid password {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\\u0026(objectClass=person)(sAMAccountName=john))&#34;,&#34;time&#34;:&#34;2022-03-06T22:30:45Z&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;username \\&#34;john\\&#34; mapped to entry CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net&#34;,&#34;time&#34;:&#34;2022-03-06T22:30:45Z&#34;} {&#34;level&#34;:&#34;error&#34;,&#34;msg&#34;:&#34;ldap: invalid password for user \\&#34;CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net\\&#34;&#34;,&#34;time&#34;:&#34;2022-03-06T22:30:45Z&#34;} # successful login {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\\u0026(objectClass=person)(sAMAccountName=john))&#34;,&#34;time&#34;:&#34;2022-03-06T22:31:21Z&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;username \\&#34;john\\&#34; mapped to entry CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net&#34;,&#34;time&#34;:&#34;2022-03-06T22:31:21Z&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\\u0026(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net))&#34;,&#34;time&#34;:&#34;2022-03-06T22:31:21Z&#34;} {&#34;level&#34;:&#34;info&#34;,&#34;msg&#34;:&#34;login successful: connector \\&#34;ldap\\&#34;, username=\\&#34;john\\&#34;, preferred_username=\\&#34;\\&#34;, email=\\&#34;CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net\\&#34;, groups=[\\&#34;vRA-Admins\\&#34; \\&#34;Tanzu-Admins\\&#34;]&#34;,&#34;time&#34;:&#34;2022-03-06T22:31:21Z&#34;} # [tl! .nocopy:end] Clearing pinniped sessions I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions get stored in ~/.config/tanzu/pinniped/sessions.yaml. The sessions expired after a while but until that happens I'm able to keep on interacting with kubectl - and not given an option to re-authenticate even if I wanted to.
So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
rm ~/.config/tanzu/pinniped/sessions.yaml # [tl! .cmd] That let me use kubectl get nodes to trigger the authentication prompt again.
Conclusion So this is a pretty basic walkthrough of how I set up my Tanzu Community Edition Kubernetes clusters for Active Directory authentication in my homelab. I feel like I've learned a lot more about TCE specifically and Kubernetes in general through this process, and I'm sure I'll learn more in the future as I keep experimenting with the setup.
Per VMware, "Pinniped provides the authentication service, which uses Dex to connect to identity providers such as Active Directory."
Setting this to just member would work, but it wouldn't return any nested group memberships. I struggled with this for a long while until I came across a post from Brian Ragazzi↗ which mentioned using member:1.2.840.113556.1.4.1941: instead. This leverages a special LDAP matching rule OID↗ to expand the nested groups.
You can read up on some other default user-facing roles here↗.
Or a different one. In reality, you probably don't want the same users having the same levels of access on both the management and workload clusters. But I stuck with just the one here for now.
(Post summary: "Deploying and configuring a Tanzu Community Edition Kubernetes cluster to support authentication with Active Directory users and groups by using pinniped and dex" https://runtimeterror.dev/ldaps-authentication-tanzu-community-edition/)

Nessus Essentials on Tanzu Community Edition
https://runtimeterror.dev/nessus-essentials-on-tanzu-community-edition/
Tags: vmware, kubernetes, tanzu, containers, security

Now that VMware has released↗ vCenter 7.0U3c↗ to resolve the Log4Shell vulnerabilities, I thought it might be fun to run a security scan against the upgraded VCSA in my homelab to see how it looks. Of course, I don't actually have a security scanner in that environment so I'll need to deploy one.
I start off by heading to tenable.com/products/nessus/nessus-essentials↗ to register for a (free!) license key which will let me scan up to 16 hosts. I'll receive the key and download link in an email, but I'm not actually going to use that link to download the Nessus binary. I've got this shiny-and-new Tanzu Community Edition Kubernetes cluster that could use some more real workloads so I'll instead opt for the Docker version↗.
Tenable provides an example docker-compose.yml↗ to make it easy to get started:
# torchlight! {&#34;lineNumbers&#34;: true} version: &#39;3.1&#39; services: nessus: image: tenableofficial/nessus restart: always container_name: nessus environment: USERNAME: &lt;user&gt; PASSWORD: &lt;password&gt; ACTIVATION_CODE: &lt;code&gt; ports: - 8834:8834 I can use that knowledge to craft something I can deploy on Kubernetes:
# torchlight! {&#34;lineNumbers&#34;: true} apiVersion: v1 kind: Service metadata: name: nessus labels: app: nessus spec: type: LoadBalancer ports: - name: nessus-web port: 443 protocol: TCP targetPort: 8834 selector: app: nessus --- apiVersion: apps/v1 kind: Deployment metadata: name: nessus spec: selector: matchLabels: app: nessus replicas: 1 template: metadata: labels: app: nessus spec: containers: - name: nessus image: tenableofficial/nessus env: - name: ACTIVATION_CODE value: &#34;ABCD-1234-EFGH-5678-IJKL&#34; - name: USERNAME value: &#34;admin&#34; - name: PASSWORD value: &#34;VMware1!&#34; ports: - name: nessus-web containerPort: 8834 Note that I'm configuring the LoadBalancer to listen on port 443 and route traffic to the pod on port 8834 so that I don't have to remember to enter an oddball port number when I want to connect to the web interface.
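Since the activation code and admin password are sitting in that manifest as plain environment variables, an optional tweak for later would be to park them in a Kubernetes Secret and reference them from the Deployment with secretKeyRef instead; the secret and key names below are my own invention, not anything from Tenable's example:
# store the sensitive values out of the manifest (single quotes keep the ! from upsetting the shell)
kubectl create secret generic nessus-creds \
  --from-literal=ACTIVATION_CODE='ABCD-1234-EFGH-5678-IJKL' \
  --from-literal=USERNAME='admin' \
  --from-literal=PASSWORD='VMware1!'
The env entries in the Deployment could then pull from that secret rather than carrying the values inline.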
And now I can just apply the file:
kubectl apply -f nessus.yaml # [tl! .cmd] service/nessus created # [tl! .nocopy:1] deployment.apps/nessus created I'll give it a moment or two to deploy and then check on the service to figure out what IP I need to use to connect:
kubectl get svc/nessus # [tl! .cmd] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1] nessus LoadBalancer 100.67.16.51 192.168.1.79 443:31260/TCP 57s I point my browser to https://192.168.1.79 and see that it's a great time for a quick coffee break since it will take a few minutes for Nessus to initialize itself: Eventually that gets replaced with a login screen, where I can authenticate using the username and password specified earlier in the YAML. After logging in, I get prompted to run a discovery scan to identify hosts on the network. There's a note that hosts revealed by the discovery scan will not count against my 16-host limit unless/until I select individual hosts for more detailed scans. That's good to know for future efforts, but for now I'm focused on just scanning my one vCenter server so I dismiss the prompt.
What I am interested in is scanning my vCenter for the Log4Shell vulnerability so I'll hit the friendly blue New Scan button at the top of the Scans page to create my scan. That shows me a list of Scan Templates: I'll scroll down a bit and pick the first Log4Shell template: I plug in a name for the scan and enter my vCenter IP (192.168.1.12) as the lone scan target: There's a note there that I'll also need to include credentials so that the Nessus scanner can log in to the target in order to conduct the scan, so I pop over to the aptly-named Credentials tab to add some SSH credentials. This is just my lab environment so I'll give it the root credentials, but if I were running Nessus in a real environment I'd probably want to use a dedicated user account just for scans. Now I can scroll to the bottom of the page, click the down-arrow next to the Save button and select the Launch option to kick off the scan: That drops me back to the My Scans view where I can see the status of my scan. I'll grab another coffee while I stare at the little green spinny thing. Okay, break's over - and so is the scan! Now I can click on the name of the scan to view the results: And I can drill down into the vulnerability details: This reveals a handful of findings related to old 1.x versions of Log4j (which went EOL in 2015 - yikes!) as well as CVE-2021-44832↗ Remote Code Execution vulnerability (which is resolved in Log4j 2.17.1), but the inclusion of Log4j 2.17.0 in vCenter 7.0U3c was sufficient to close the highly-publicized CVE-2021-44228↗ Log4Shell vulnerability. Hopefully VMware can get these other Log4j vulnerabilities taken care of in another upcoming vCenter release.
So there's that curiosity satisfied, and now I've got a handy new tool to play with in my lab.
Bulk Import vSphere dvPortGroups to phpIPAM

I recently wrote about getting started with VMware's Tanzu Community Edition↗ and deploying phpIPAM↗ as my first real-world Kubernetes workload. Well I've spent much of my time since then working on a script which would help to populate my phpIPAM instance with a list of networks to monitor.
Planning and Exporting

The first step in making this work was to figure out which networks I wanted to import. We've got hundreds of different networks in use across our production vSphere environments. I focused only on those which are portgroups on distributed virtual switches since those configurations are pretty standardized (being vCenter constructs instead of configured on individual hosts). These dvPortGroups bear a naming standard which conveys all sorts of useful information, and it's easy and safe to rename any dvPortGroups which don't fit the standard (unlike renaming portgroups on a standard virtual switch).
The standard naming convention is [Site/Description] [Network Address]{/[Mask]}. So the networks (across two virtual datacenters and two dvSwitches) look something like this: Some networks have masks in the name and some don't; some use an underscore (_) rather than a slash (/) to separate the network from the mask. Most networks correctly include the network address with a 0 in the last octet, but some use an x instead. And the VLANs associated with the networks have a varying number of digits. Consistency can be difficult, so these are all things I had to keep in mind as I worked on a solution which would make a true best effort at importing all of these.
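To make those parsing rules concrete, here's a minimal sketch of how the naming quirks can be normalized into a subnet and mask. The logic mirrors what the full import script below does; the function name and example inputs here are just illustrative.

```python
# Minimal sketch: normalize a dvPortGroup name like 'DRE-Servers 172.16.60.x' or
# 'VPOT8-Servers 172.20.10.64_26' into a subnet and mask. Mirrors the rules the
# import script applies; this simplified version is for illustration only.
import re

IP_PATTERN = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.[0-9xX]{1,3}')

def parse_portgroup_name(name):
    match = IP_PATTERN.search(name)
    if not match:
        return None  # name doesn't follow the standard, so skip it
    subnet = match.group(0)
    # some networks use 'x' in the last octet instead of '0'
    if subnet.split('.')[-1].lower() == 'x':
        subnet = subnet.lower().replace('x', '0')
    # the mask may follow a '/' or an '_', or be missing entirely (assume /24)
    if '/' in name:
        mask = name.split('/')[-1]
    elif '_' in name:
        mask = name.split('_')[-1]
    else:
        mask = '24'
    return {'name': name, 'subnet': subnet, 'mask': mask}

for example in ['MGT-Home 192.168.1.0', 'DRE-Servers 172.16.60.x', 'VPOT8-Servers 172.20.10.64_26']:
    print(parse_portgroup_name(example))
```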
As long as the dvPortGroup names stick to this format I can parse the name to come up with a description as well as the IP space of the network. The dvPortGroup also carries information about the associated VLAN, which is useful information to have. And I can easily export this information with a simple PowerCLI query:
get-vdportgroup | select Name, VlanConfiguration # [tl! .cmd_pwsh]
# [tl! .nocopy:start]
Name                            VlanConfiguration
----                            -----------------
MGT-Home 192.168.1.0
MGT-Servers 172.16.10.0         VLAN 1610
BOW-Servers 172.16.20.0         VLAN 1620
BOW-Servers 172.16.30.0         VLAN 1630
BOW-Servers 172.16.40.0         VLAN 1640
DRE-Servers 172.16.50.0         VLAN 1650
DRE-Servers 172.16.60.x         VLAN 1660
VPOT8-Mgmt 172.20.10.0/27       VLAN 20
VPOT8-Servers 172.20.10.32/27   VLAN 30
VPOT8-Servers 172.20.10.64_26   VLAN 40
# [tl! .nocopy:end]

In my homelab, I only have a single vCenter. In production, we've got a handful of vCenters, and each manages the hosts in a given region. So I can use information about which vCenter hosts a dvPortGroup to figure out which region a network is in. When I import this data into phpIPAM, I can use the vCenter name to assign remote scan agents↗ to networks based on the region that they're in. I can also grab information about which virtual datacenter a dvPortGroup lives in, which I'll use for grouping networks into sites or sections.
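The region mapping hinges on the Uid property shown in the next listing; pulling the vCenter's short hostname out of it is just a chain of string splits. A quick sketch of that step (the same chain the import script applies later):

```python
# Derive a region key (the vCenter's short hostname) from a dvPortGroup's Uid,
# using the same split chain the import script applies to the CSV export.
def vcenter_from_uid(uid):
    # '/VIServer=lab\[email protected]:443/DistributedPortgroup=...'
    # -> take what's after '@', drop the ':443' port, keep the first DNS label
    return uid.split('@')[1].split(':')[0].split('.')[0]

uid = '/VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27015/'
print(vcenter_from_uid(uid))  # -> 'vcsa'
```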
The vCenter can be found in the Uid property returned by get-vdportgroup:
get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid # [tl! .cmd_pwsh] # [tl! .nocopy:start] Name VlanConfiguration Datacenter Uid ---- ----------------- ---------- --- MGT-Home 192.168.1.0 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27015/ MGT-Servers 172.16.10.0 VLAN 1610 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27017/ BOW-Servers 172.16.20.0 VLAN 1620 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28010/ BOW-Servers 172.16.30.0 VLAN 1630 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28011/ BOW-Servers 172.16.40.0 VLAN 1640 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28012/ DRE-Servers 172.16.50.0 VLAN 1650 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28013/ DRE-Servers 172.16.60.x VLAN 1660 Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28014/ VPOT8-Mgmt 172.20.10.0/… VLAN 20 Other Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35018/ VPOT8-Servers 172.20.10… VLAN 30 Other Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35019/ VPOT8-Servers 172.20.10… VLAN 40 Other Lab /VIServer=lab\\[email protected]:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35020/ # [tl! .nocopy:end] It's not pretty, but it'll do the trick. All that's left is to export this data into a handy-dandy CSV-formatted file that I can easily parse for import:
get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid ` # [tl! .cmd_pwsh]
  | export-csv -NoTypeInformation ./networks.csv

Setting up phpIPAM

After deploying a fresh phpIPAM instance on my Tanzu Community Edition Kubernetes cluster, there are a few additional steps needed to enable API access. To start, I log in to my phpIPAM instance and navigate to the Administration > Server Management > phpIPAM Settings page, where I enable both the Prettify links and API feature settings - making sure to hit the Save button at the bottom of the page once I do so. Then I need to head to the User Management page to create a new user that will be used to authenticate against the API. And finally, I head to the API section to create a new API key with Read/Write permissions.

I'm also going to head in to Administration > IP Related Management > Sections and delete the default sample sections so that the inventory will be nice and empty.

Script time

Well, that's enough prep work; now it's time for the Python3 script↗:
# torchlight! {&#34;lineNumbers&#34;: true} # The latest version of this script can be found on Github: # https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py import requests from collections import namedtuple check_cert = True created = 0 remote_agent = False name_to_id = namedtuple(&#39;name_to_id&#39;, [&#39;name&#39;, &#39;id&#39;]) ## for testing only: # from requests.packages.urllib3.exceptions import InsecureRequestWarning # requests.packages.urllib3.disable_warnings(InsecureRequestWarning) # check_cert = False ## Makes sure input fields aren&#39;t blank. def validate_input_is_not_empty(field, prompt): while True: user_input = input(f&#39;\\n{prompt}:\\n&#39;) if len(user_input) == 0: print(f&#39;[ERROR] {field} cannot be empty!&#39;) continue else: return user_input ## Takes in a list of dictionary items, extracts all the unique values for a given key, # and returns a sorted list of those. def get_sorted_list_of_unique_values(key, list_of_dict): valueSet = set(sub[key] for sub in list_of_dict) valueList = list(valueSet) valueList.sort() return valueList ## Match names and IDs def get_id_from_sets(name, sets): return [item.id for item in sets if name == item.name][0] ## Authenticate to phpIPAM endpoint and return an auth token def auth_session(uri, auth): print(f&#39;Authenticating to {uri}...&#39;) try: req = requests.post(f&#39;{uri}/user/&#39;, auth=auth, verify=check_cert) except: raise requests.exceptions.RequestException if req.status_code != 200: print(f&#39;[ERROR] Authentication failure: {req.json()}&#39;) raise requests.exceptions.RequestException token = {&#34;token&#34;: req.json()[&#39;data&#39;][&#39;token&#39;]} print(&#39;\\n[AUTH_SUCCESS] Authenticated successfully!&#39;) return token ## Find or create a remote scan agent for each region (vcenter) def get_agent_sets(uri, token, regions): agent_sets = [] def create_agent_set(uri, token, name): import secrets # generate a random secret to be used for identifying this agent payload = { &#39;name&#39;: name, &#39;type&#39;: &#39;mysql&#39;, &#39;code&#39;: secrets.base64.urlsafe_b64encode(secrets.token_bytes(24)).decode(&#34;utf-8&#34;), &#39;description&#39;: f&#39;Remote scan agent for region {name}&#39; } req = requests.post(f&#39;{uri}/tools/scanagents/&#39;, data=payload, headers=token, verify=check_cert) id = req.json()[&#39;id&#39;] agent_set = name_to_id(name, id) print(f&#39;[AGENT_CREATE] {name} created.&#39;) return agent_set for region in regions: name = regions[region][&#39;name&#39;] req = requests.get(f&#39;{uri}/tools/scanagents/?filter_by=name&amp;filter_value={name}&#39;, headers=token, verify=check_cert) if req.status_code == 200: id = req.json()[&#39;data&#39;][0][&#39;id&#39;] agent_set = name_to_id(name, id) else: agent_set = create_agent_set(uri, token, name) agent_sets.append(agent_set) return agent_sets ## Find or create a section for each virtual datacenter def get_section(uri, token, section, parentSectionId): def create_section(uri, token, section, parentSectionId): payload = { &#39;name&#39;: section, &#39;masterSection&#39;: parentSectionId, &#39;permissions&#39;: &#39;{&#34;2&#34;:&#34;2&#34;}&#39;, &#39;showVLAN&#39;: &#39;1&#39; } req = requests.post(f&#39;{uri}/sections/&#39;, data=payload, headers=token, verify=check_cert) id = req.json()[&#39;id&#39;] print(f&#39;[SECTION_CREATE] Section {section} created.&#39;) return id req = requests.get(f&#39;{uri}/sections/{section}/&#39;, headers=token, verify=check_cert) if req.status_code == 200: id = 
req.json()[&#39;data&#39;][&#39;id&#39;] else: id = create_section(uri, token, section, parentSectionId) return id ## Find or create VLANs def get_vlan_sets(uri, token, vlans): vlan_sets = [] def create_vlan_set(uri, token, vlan): payload = { &#39;name&#39;: f&#39;VLAN {vlan}&#39;, &#39;number&#39;: vlan } req = requests.post(f&#39;{uri}/vlan/&#39;, data=payload, headers=token, verify=check_cert) id = req.json()[&#39;id&#39;] vlan_set = name_to_id(vlan, id) print(f&#39;[VLAN_CREATE] VLAN {vlan} created.&#39;) return vlan_set for vlan in vlans: if vlan != 0: req = requests.get(f&#39;{uri}/vlan/?filter_by=number&amp;filter_value={vlan}&#39;, headers=token, verify=check_cert) if req.status_code == 200: id = req.json()[&#39;data&#39;][0][&#39;vlanId&#39;] vlan_set = name_to_id(vlan, id) else: vlan_set = create_vlan_set(uri, token, vlan) vlan_sets.append(vlan_set) return vlan_sets ## Find or create nameserver configurations for each region def get_nameserver_sets(uri, token, regions): nameserver_sets = [] def create_nameserver_set(uri, token, name, nameservers): payload = { &#39;name&#39;: name, &#39;namesrv1&#39;: nameservers, &#39;description&#39;: f&#39;Nameserver created for region {name}&#39; } req = requests.post(f&#39;{uri}/tools/nameservers/&#39;, data=payload, headers=token, verify=check_cert) id = req.json()[&#39;id&#39;] nameserver_set = name_to_id(name, id) print(f&#39;[NAMESERVER_CREATE] Nameserver {name} created.&#39;) return nameserver_set for region in regions: name = regions[region][&#39;name&#39;] req = requests.get(f&#39;{uri}/tools/nameservers/?filter_by=name&amp;filter_value={name}&#39;, headers=token, verify=check_cert) if req.status_code == 200: id = req.json()[&#39;data&#39;][0][&#39;id&#39;] nameserver_set = name_to_id(name, id) else: nameserver_set = create_nameserver_set(uri, token, name, regions[region][&#39;nameservers&#39;]) nameserver_sets.append(nameserver_set) return nameserver_sets ## Find or create subnet for each dvPortGroup def create_subnet(uri, token, network): def update_nameserver_permissions(uri, token, network): nameserverId = network[&#39;nameserverId&#39;] sectionId = network[&#39;sectionId&#39;] req = requests.get(f&#39;{uri}/tools/nameservers/{nameserverId}/&#39;, headers=token, verify=check_cert) permissions = req.json()[&#39;data&#39;][&#39;permissions&#39;] permissions = str(permissions).split(&#39;;&#39;) if not sectionId in permissions: permissions.append(sectionId) if &#39;None&#39; in permissions: permissions.remove(&#39;None&#39;) permissions = &#39;;&#39;.join(permissions) payload = { &#39;permissions&#39;: permissions } req = requests.patch(f&#39;{uri}/tools/nameservers/{nameserverId}/&#39;, data=payload, headers=token, verify=check_cert) payload = { &#39;subnet&#39;: network[&#39;subnet&#39;], &#39;mask&#39;: network[&#39;mask&#39;], &#39;description&#39;: network[&#39;name&#39;], &#39;sectionId&#39;: network[&#39;sectionId&#39;], &#39;scanAgent&#39;: network[&#39;agentId&#39;], &#39;nameserverId&#39;: network[&#39;nameserverId&#39;], &#39;vlanId&#39;: network[&#39;vlanId&#39;], &#39;pingSubnet&#39;: &#39;1&#39;, &#39;discoverSubnet&#39;: &#39;1&#39;, &#39;resolveDNS&#39;: &#39;1&#39;, &#39;DNSrecords&#39;: &#39;1&#39; } req = requests.post(f&#39;{uri}/subnets/&#39;, data=payload, headers=token, verify=check_cert) if req.status_code == 201: network[&#39;subnetId&#39;] = req.json()[&#39;id&#39;] update_nameserver_permissions(uri, token, network) print(f&#34;[SUBNET_CREATE] Created subnet {req.json()[&#39;data&#39;]}&#34;) global created 
created += 1 elif req.status_code == 409: print(f&#34;[SUBNET_EXISTS] Subnet {network[&#39;subnet&#39;]}/{network[&#39;mask&#39;]} already exists.&#34;) else: print(f&#34;[ERROR] Problem creating subnet {network[&#39;name&#39;]}: {req.json()}&#34;) ## Import list of networks from the specified CSV file def import_networks(filepath): print(f&#39;Importing networks from {filepath}...&#39;) import csv import re ipPattern = re.compile(&#39;\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.[0-9xX]{1,3}&#39;) networks = [] with open(filepath) as csv_file: reader = csv.DictReader(csv_file) line_count = 0 for row in reader: network = {} if line_count &gt; 0: if(re.search(ipPattern, row[&#39;Name&#39;])): network[&#39;subnet&#39;] = re.findall(ipPattern, row[&#39;Name&#39;])[0] if network[&#39;subnet&#39;].split(&#39;.&#39;)[-1].lower() == &#39;x&#39;: network[&#39;subnet&#39;] = network[&#39;subnet&#39;].lower().replace(&#39;x&#39;, &#39;0&#39;) network[&#39;name&#39;] = row[&#39;Name&#39;] if &#39;/&#39; in row[&#39;Name&#39;][-3]: network[&#39;mask&#39;] = row[&#39;Name&#39;].split(&#39;/&#39;)[-1] elif &#39;_&#39; in row[&#39;Name&#39;][-3]: network[&#39;mask&#39;] = row[&#39;Name&#39;].split(&#39;_&#39;)[-1] else: network[&#39;mask&#39;] = &#39;24&#39; network[&#39;section&#39;] = row[&#39;Datacenter&#39;] try: network[&#39;vlan&#39;] = int(row[&#39;VlanConfiguration&#39;].split(&#39;VLAN &#39;)[1]) except: network[&#39;vlan&#39;] = 0 network[&#39;vcenter&#39;] = f&#34;{(row[&#39;Uid&#39;].split(&#39;@&#39;))[1].split(&#39;:&#39;)[0].split(&#39;.&#39;)[0]}&#34; networks.append(network) line_count += 1 print(f&#39;Processed {line_count} lines and found:&#39;) return networks def main(): import socket import getpass import argparse from pathlib import Path parser = argparse.ArgumentParser() parser.add_argument(&#34;filepath&#34;, type=Path) # Accept CSV file as an argument to the script or prompt for input if necessary try: p = parser.parse_args() filepath = p.filepath except: # make sure filepath is a path to an actual file print(&#34;&#34;&#34;\\n\\n This script helps to add vSphere networks to phpIPAM for IP address management. It is expected that the vSphere networks are configured as portgroups on distributed virtual switches and named like &#39;[Description] [Subnet IP]{/[mask]}&#39; (ex: &#39;LAB-Servers 192.168.1.0&#39;). The following PowerCLI command can be used to export the networks from vSphere: Get-VDPortgroup | Select Name, Datacenter, VlanConfiguration, Uid | Export-Csv -NoTypeInformation ./networks.csv Subnets added to phpIPAM will be automatically configured for monitoring either using the built-in scan agent (default) or a new remote scan agent for each vCenter. 
&#34;&#34;&#34;) while True: filepath = Path(validate_input_is_not_empty(&#39;Filepath&#39;, &#39;Path to CSV-formatted export from vCenter&#39;)) if filepath.exists(): break else: print(f&#39;[ERROR] Unable to find file at {filepath.name}.&#39;) continue # get collection of networks to import networks = import_networks(filepath) networkNames = get_sorted_list_of_unique_values(&#39;name&#39;, networks) print(f&#39;\\n- {len(networkNames)} networks:\\n\\t{networkNames}&#39;) vcenters = get_sorted_list_of_unique_values(&#39;vcenter&#39;, networks) print(f&#39;\\n- {len(vcenters)} vCenter servers:\\n\\t{vcenters}&#39;) vlans = get_sorted_list_of_unique_values(&#39;vlan&#39;, networks) print(f&#39;\\n- {len(vlans)} VLANs:\\n\\t{vlans}&#39;) sections = get_sorted_list_of_unique_values(&#39;section&#39;, networks) print(f&#39;\\n- {len(sections)} Datacenters:\\n\\t{sections}&#39;) regions = {} for vcenter in vcenters: nameservers = None name = validate_input_is_not_empty(&#39;Region Name&#39;, f&#39;Region name for vCenter {vcenter}&#39;) for region in regions: if name in regions[region][&#39;name&#39;]: nameservers = regions[region][&#39;nameservers&#39;] if not nameservers: nameservers = validate_input_is_not_empty(&#39;Nameserver IPs&#39;, f&#34;Comma-separated list of nameserver IPs in {name}&#34;) nameservers = nameservers.replace(&#39;,&#39;,&#39;;&#39;).replace(&#39; &#39;,&#39;&#39;) regions[vcenter] = {&#39;name&#39;: name, &#39;nameservers&#39;: nameservers} # make sure hostname resolves while True: hostname = input(&#39;\\nFully-qualified domain name of the phpIPAM host:\\n&#39;) if len(hostname) == 0: print(&#39;[ERROR] Hostname cannot be empty.&#39;) continue try: test = socket.gethostbyname(hostname) except: print(f&#39;[ERROR] Unable to resolve {hostname}.&#39;) continue else: del test break username = validate_input_is_not_empty(&#39;Username&#39;, f&#39;Username with read/write access to {hostname}&#39;) password = getpass.getpass(f&#39;Password for {username}:\\n&#39;) apiAppId = validate_input_is_not_empty(&#39;App ID&#39;, f&#39;App ID for API key (from https://{hostname}/administration/api/)&#39;) agent = input(&#39;\\nUse per-region remote scan agents instead of a single local scanner? (y/N):\\n&#39;) try: if agent.lower()[0] == &#39;y&#39;: global remote_agent remote_agent = True except: pass proceed = input(f&#39;\\n\\nProceed with importing {len(networkNames)} networks to {hostname}? 
(y/N):\\n&#39;) try: if proceed.lower()[0] == &#39;y&#39;: pass else: import sys sys.exit(&#34;Operation aborted.&#34;) except: import sys sys.exit(&#34;Operation aborted.&#34;) del proceed # assemble variables uri = f&#39;https://{hostname}/api/{apiAppId}&#39; auth = (username, password) # auth to phpIPAM token = auth_session(uri, auth) # create nameserver entries nameserver_sets = get_nameserver_sets(uri, token, regions) vlan_sets = get_vlan_sets(uri, token, vlans) if remote_agent: agent_sets = get_agent_sets(uri, token, regions) # create the networks for network in networks: network[&#39;region&#39;] = regions[network[&#39;vcenter&#39;]][&#39;name&#39;] network[&#39;regionId&#39;] = get_section(uri, token, network[&#39;region&#39;], None) network[&#39;nameserverId&#39;] = get_id_from_sets(network[&#39;region&#39;], nameserver_sets) network[&#39;sectionId&#39;] = get_section(uri, token, network[&#39;section&#39;], network[&#39;regionId&#39;]) if network[&#39;vlan&#39;] == 0: network[&#39;vlanId&#39;] = None else: network[&#39;vlanId&#39;] = get_id_from_sets(network[&#39;vlan&#39;], vlan_sets) if remote_agent: network[&#39;agentId&#39;] = get_id_from_sets(network[&#39;region&#39;], agent_sets) else: network[&#39;agentId&#39;] = &#39;1&#39; create_subnet(uri, token, network) print(f&#39;\\n[FINISH] Created {created} of {len(networks)} networks.&#39;) if __name__ == &#34;__main__&#34;: main() I'll run it and provide the path to the network export CSV file:
python3 phpipam-bulk-import.py ~/networks.csv # [tl! .cmd] The script will print out a little descriptive bit about what sort of networks it's going to try to import and then will straight away start processing the file to identify the networks, vCenters, VLANs, and datacenters which will be imported:
Importing networks from /home/john/networks.csv...
Processed 17 lines and found:

- 10 networks:
        ['BOW-Servers 172.16.20.0', 'BOW-Servers 172.16.30.0', 'BOW-Servers 172.16.40.0', 'DRE-Servers 172.16.50.0', 'DRE-Servers 172.16.60.x', 'MGT-Home 192.168.1.0', 'MGT-Servers 172.16.10.0', 'VPOT8-Mgmt 172.20.10.0/27', 'VPOT8-Servers 172.20.10.32/27', 'VPOT8-Servers 172.20.10.64_26']
- 1 vCenter servers:
        ['vcsa']
- 10 VLANs:
        [0, 20, 30, 40, 1610, 1620, 1630, 1640, 1650, 1660]
- 2 Datacenters:
        ['Lab', 'Other Lab']

It then starts prompting for the additional details which will be needed:
Region name for vCenter vcsa: Labby
Comma-separated list of nameserver IPs in Lab vCenter: 192.168.1.5
Fully-qualified domain name of the phpIPAM host: ipam-k8s.lab.bowdre.net
Username with read/write access to ipam-k8s.lab.bowdre.net: api-user
Password for api-user:
App ID for API key (from https://ipam-k8s.lab.bowdre.net/administration/api/): api-user
Use per-region remote scan agents instead of a single local scanner? (y/N): y

Up to this point, the script has only been processing data locally, getting things ready for talking to the phpIPAM API. But now, it prompts to confirm that we actually want to do the thing (yes please) and then gets to work:
Proceed with importing 10 networks to ipam-k8s.lab.bowdre.net? (y/N): y
Authenticating to https://ipam-k8s.lab.bowdre.net/api/api-user...
[AUTH_SUCCESS] Authenticated successfully!
[VLAN_CREATE] VLAN 20 created.
[VLAN_CREATE] VLAN 30 created.
[VLAN_CREATE] VLAN 40 created.
[VLAN_CREATE] VLAN 1610 created.
[VLAN_CREATE] VLAN 1620 created.
[VLAN_CREATE] VLAN 1630 created.
[VLAN_CREATE] VLAN 1640 created.
[VLAN_CREATE] VLAN 1650 created.
[VLAN_CREATE] VLAN 1660 created.
[SECTION_CREATE] Section Labby created.
[SECTION_CREATE] Section Lab created.
[SUBNET_CREATE] Created subnet 192.168.1.0/24
[SUBNET_CREATE] Created subnet 172.16.10.0/24
[SUBNET_CREATE] Created subnet 172.16.20.0/24
[SUBNET_CREATE] Created subnet 172.16.30.0/24
[SUBNET_CREATE] Created subnet 172.16.40.0/24
[SUBNET_CREATE] Created subnet 172.16.50.0/24
[SUBNET_CREATE] Created subnet 172.16.60.0/24
[SECTION_CREATE] Section Other Lab created.
[SUBNET_CREATE] Created subnet 172.20.10.0/27
[SUBNET_CREATE] Created subnet 172.20.10.32/27
[SUBNET_CREATE] Created subnet 172.20.10.64/26
[FINISH] Created 10 of 10 networks.

Success! Now I can log in to my phpIPAM instance and check out my newly-imported subnets. Even the one with the weird name formatting was parsed and imported correctly. So now phpIPAM knows about the vSphere networks I care about, and it can keep track of which VLAN and nameservers go with which networks. Great! But it still isn't scanning or monitoring those networks, even though I told the script that I wanted to use a remote scan agent. And I can check in the Administration > Server management > Scan agents section of the phpIPAM interface to see my newly-created agent configuration. ... but I haven't actually deployed an agent yet. I'll do that by following the same basic steps described here to spin up my phpipam-agent on Kubernetes, and I'll plug in that automagically-generated code for the IPAM_AGENT_KEY environment variable:
# torchlight! {"lineNumbers": true}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpipam-agent
spec:
  selector:
    matchLabels:
      app: phpipam-agent
  replicas: 1
  template:
    metadata:
      labels:
        app: phpipam-agent
    spec:
      containers:
      - name: phpipam-agent
        image: ghcr.io/jbowdre/phpipam-agent:latest
        env:
        - name: IPAM_DATABASE_HOST
          value: "ipam-k8s.lab.bowdre.net"
        - name: IPAM_DATABASE_NAME
          value: "phpipam"
        - name: IPAM_DATABASE_USER
          value: "phpipam"
        - name: IPAM_DATABASE_PASS
          value: "VMware1!"
        - name: IPAM_DATABASE_PORT
          value: "3306"
        - name: IPAM_AGENT_KEY
          value: "CxtRbR81r1ojVL2epG90JaShxIUBl0bT"
        - name: IPAM_SCAN_INTERVAL
          value: "15m"
        - name: IPAM_RESET_AUTODISCOVER
          value: "false"
        - name: IPAM_REMOVE_DHCP
          value: "false"
        - name: TZ
          value: "UTC"

I kick it off with a kubectl apply command and check back a few minutes later (after the 15-minute interval defined in the above YAML) to see that it worked: the remote agent scanned like it was supposed to and is reporting IP status back to the phpIPAM database server. I think I've got some more tweaks to do with this environment (why isn't phpIPAM resolving hostnames despite the correct DNS servers getting configured?) but this at least demonstrates a successful proof-of-concept import thanks to my Python script. Sure, I only imported 10 networks here, but I feel like I'm ready to process the several hundred which are available in our production environment now.
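If I want to poke at what got created (or script follow-up checks before unleashing this on production), the authentication handshake is the same one the script's auth_session function performs. Here's a short sketch re-using the lab hostname, app ID, and endpoints from above; the password placeholder is mine, since the real script prompts for it interactively.

```python
# Re-use the import script's authentication pattern for ad-hoc follow-up queries.
# Hostname, app ID, username, and endpoints match the ones used above; the
# password here is a placeholder (the real script prompts for it interactively).
import requests

uri = 'https://ipam-k8s.lab.bowdre.net/api/api-user'
auth = ('api-user', 'CHANGE_ME')

# POST to /user/ with basic auth returns a token that gets passed as a header
req = requests.post(f'{uri}/user/', auth=auth)
req.raise_for_status()
token = {'token': req.json()['data']['token']}

# for example, look up the newly-created remote scan agent by its region name
agents = requests.get(f'{uri}/tools/scanagents/?filter_by=name&filter_value=Labby',
                      headers=token)
print(agents.json())
```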
And who knows, maybe this script will come in handy for someone else. Until next time!
Logging in to a Tanzu Community Edition Kubernetes Cluster from a new device

When I set up my Tanzu Community Edition environment, I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the kind bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the tanzu management-cluster kubeconfig get command on my Linux VM to export the kubeconfig of my management (tce-mgmt) cluster to a file:
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml # [tl! .cmd] I then used scp to pull the file from the VM into my local Linux environment, and proceeded to install kubectl and the tanzu CLI (making sure to also enable shell auto-completion along the way!).
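Before handing that file to the Tanzu CLI, it's easy to sanity-check that the copied kubeconfig survived the trip. This sketch uses the official kubernetes Python client, which is my addition rather than part of the original workflow; the file path and context name match the ones used below.

```python
# Quick sanity check of the exported kubeconfig before importing it with `tanzu login`.
# Uses the kubernetes Python client (pip install kubernetes); this is an optional
# extra, not part of the original workflow. Path and context match the command below.
from pathlib import Path

from kubernetes import client, config

KUBECONFIG = str(Path('~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml').expanduser())

# list the contexts defined in the file and which one is active
contexts, active = config.list_kube_config_contexts(config_file=KUBECONFIG)
print('Contexts:', [c['name'] for c in contexts])
print('Active:  ', active['name'])

# load the admin context and make sure the API server actually answers
config.load_kube_config(config_file=KUBECONFIG, context='tce-mgmt-admin@tce-mgmt')
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```

Resolving the path up front also sidesteps the relative-path gotcha called out in the note below.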
Now I'm ready to import the configuration locally with tanzu login on my Chromebook:
tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml \ # [tl! .cmd]
  --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt # [tl! .nocopy]

Use the absolute path
Pass in the full path to the exported kubeconfig file. This will help the Tanzu CLI to load the correct config across future terminal sessions.
Even though that's just importing the management cluster, it actually grants access to both the management and workload clusters:
tanzu cluster list # [tl! .cmd] NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN # [tl! .nocopy:2] tce-work default running 1/1 1/1 v1.21.2+vmware.1 &lt;none&gt; dev tanzu cluster get tce-work # [tl! .cmd] NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start] tce-work default running 1/1 1/1 v1.21.2+vmware.1 &lt;none&gt; ℹ Details: NAME READY SEVERITY REASON SINCE MESSAGE /tce-work True 24h ├─ClusterInfrastructure - VSphereCluster/tce-work True 24h ├─ControlPlane - KubeadmControlPlane/tce-work-control-plane True 24h │ └─Machine/tce-work-control-plane-vc2pb True 24h └─Workers └─MachineDeployment/tce-work-md-0 └─Machine/tce-work-md-0-687444b744-crc9q True 24h # [tl! .nocopy:end] tanzu management-cluster get # [tl! .cmd] NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start] tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management Details: NAME READY SEVERITY REASON SINCE MESSAGE /tce-mgmt True 23h ├─ClusterInfrastructure - VSphereCluster/tce-mgmt True 23h ├─ControlPlane - KubeadmControlPlane/tce-mgmt-control-plane True 23h │ └─Machine/tce-mgmt-control-plane-7pwz7 True 23h └─Workers └─MachineDeployment/tce-mgmt-md-0 └─Machine/tce-mgmt-md-0-745b858d44-5llk5 True 23h Providers: NAMESPACE NAME TYPE PROVIDERNAME VERSION WATCHNAMESPACE capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23 capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23 capi-system cluster-api CoreProvider cluster-api v0.3.23 capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10 # [tl! .nocopy:end] And I can then tell kubectl about the two clusters:
tanzu management-cluster kubeconfig get tce-mgmt --admin # [tl! .cmd] Credentials of cluster &#39;tce-mgmt&#39; have been saved # [tl! .nocopy:2] You can now access the cluster by running &#39;kubectl config use-context tce-mgmt-admin@tce-mgmt&#39; tanzu cluster kubeconfig get tce-work --admin # [tl! .cmd] Credentials of cluster &#39;tce-work&#39; have been saved # [tl! .nocopy:1] You can now access the cluster by running &#39;kubectl config use-context tce-work-admin@tce-work&#39; And sure enough, there are my contexts:
kubectl config get-contexts # [tl! .cmd] CURRENT NAME CLUSTER AUTHINFO NAMESPACE # [tl! .nocopy:3] tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin * tce-work-admin@tce-work tce-work tce-work-admin kubectl get nodes -o wide # [tl! .cmd] NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME # [tl! .nocopy:2] tce-work-control-plane-vc2pb Ready control-plane,master 23h v1.21.2+vmware.1 192.168.1.132 192.168.1.132 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6 tce-work-md-0-687444b744-crc9q Ready &lt;none&gt; 23h v1.21.2+vmware.1 192.168.1.133 192.168.1.133 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6 Perfect, now I can get back to Tanzuing from my Chromebook without having to jump through a VM. (And, thanks to Tailscale, I can even access my TCE resources remotely!)
Enable Tanzu CLI Auto-Completion in bash and zsh

Lately I've been spending some time getting more familiar with VMware's Tanzu Community Edition↗ Kubernetes distribution, but I'm still not quite familiar enough with the tanzu command line. If only there were a better way for me to discover the available commands for a given context and help me type them correctly...
Oh, but there is! You see, one of the available Tanzu commands is tanzu completion [shell], which will spit out the necessary code to generate handy context-based auto-completions appropriate for the shell of your choosing (provided that you choose either bash or zsh, that is).
Running tanzu completion --help will tell you what's needed, and you can just copy/paste the commands appropriate for your shell:
# Bash instructions:
## Load only for current session:
source <(tanzu completion bash)
## Load for all new sessions:
tanzu completion bash > $HOME/.tanzu/completion.bash.inc
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile

# Zsh instructions:
## Load only for current session:
source <(tanzu completion zsh)
## Load for all new sessions:
echo "autoload -U compinit; compinit" >> ~/.zshrc
tanzu completion zsh > "${fpath[1]}/_tanzu"

So to get the completions to load automatically whenever you start a bash shell, run:

tanzu completion bash > $HOME/.tanzu/completion.bash.inc # [tl! .cmd:1]
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile

For a zsh shell, it's:

echo "autoload -U compinit; compinit" >> ~/.zshrc # [tl! .cmd:1]
tanzu completion zsh > "${fpath[1]}/_tanzu"

And that's it! The next time you open a shell (or source your relevant profile), you'll be able to [TAB] your way through the Tanzu CLI!
Using PowerCLI to list Linux VMs and Datacenter Locations

I recently needed to export a list of all the Linux VMs in a rather large vSphere environment spanning multiple vCenters (and the entire globe), and I wanted to include information about which virtual datacenter each VM lived in to make it easier to map VMs to their physical location.
I've got a Connect-vCenters function that I use to quickly log into multiple vCenters at once. That then enables me to run a single query across the entire landscape - but what query? There isn't really a direct way to get datacenter information out of the results generated by Get-VM; I could run an additional Get-Datacenter query against each returned VM object but that doesn't sound very efficient.
What I came up with is using Get-Datacenter to enumerate each virtual datacenter, and then list the VMs matching my query within:
# torchlight! {"lineNumbers": true}
$linuxVms = foreach( $datacenter in ( Get-Datacenter )) {
  Get-Datacenter $datacenter | Get-VM | Where { $_.ExtensionData.Config.GuestFullName -notmatch "win" -and $_.Name -notmatch "vcls" } | `
    Select @{ N="Datacenter";E={ $datacenter.Name }},
      Name,
      Notes,
      @{ N="Configured OS";E={ $_.ExtensionData.Config.GuestFullName }}, # OS based on the .vmx configuration
      @{ N="Running OS";E={ $_.Guest.OsFullName }}, # OS as reported by VMware Tools
      @{ N="Powered On";E={ $_.PowerState -eq "PoweredOn" }},
      @{ N="IP Address";E={ $_.ExtensionData.Guest.IpAddress }}
}
$linuxVms | Export-Csv -Path ./linuxVms.csv -NoTypeInformation -UseCulture

This gave me a CSV export with exactly the data I needed.
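And if I later want a quick summary from that export without querying vCenter again - say, how many matching VMs live in each virtual datacenter - a few lines of Python against the CSV will do it. This is illustrative only; the column names match the calculated properties in the Select statement above.

```python
# Summarize the PowerCLI export: count of matching VMs per virtual datacenter.
# Column names match the Select statement above. Note that Export-Csv -UseCulture
# may emit a non-comma delimiter depending on locale; pass delimiter= if needed.
import csv
from collections import Counter

counts = Counter()
with open('linuxVms.csv', newline='') as f:
    for row in csv.DictReader(f):
        counts[row['Datacenter']] += 1

for datacenter, total in counts.most_common():
    print(f'{datacenter}: {total}')
```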
VMware Tanzu Community Edition Kubernetes Platform in a Homelab

Back in October, VMware announced↗ Tanzu Community Edition↗ as a way to provide "a full-featured, easy-to-manage Kubernetes platform that's perfect for users and learners alike." TCE bundles a bunch of open-source components together in a modular, "batteries included but swappable" way. I've been meaning to brush up on my Kubernetes skills so I thought deploying and using TCE in my self-contained homelab would be a fun and rewarding learning exercise - and it was!
Here's how I did it.
Planning

TCE supports several different deployment scenarios and targets. It can be configured as separate Management and Workload Clusters or as a single integrated Standalone Cluster, and deployed to cloud providers like AWS and Azure, on-premise vSphere, or even a local Docker environment1. I'll be using the standard Management + Workload Cluster setup in my on-prem vSphere, so I start by reviewing the Prepare to Deploy a Cluster to vSphere↗ documentation to get an idea of what I'll need.
Looking ahead, part of the installation process creates a local KIND↗ cluster for bootstrapping the Management and Workload clusters. I do most of my home computing (and homelab work) by using the Linux environment available on my Chromebook. Unfortunately I know from past experience that KIND will not work within this environment so I'll be using a Debian 10 VM to do the deployment.
Networking

The Kubernetes node VMs will need to be attached to a network with a DHCP server to assign their addresses, and that network will need to be able to talk to vSphere. My router handles DHCP for the range 192.168.1.101-250 so I'll plan on using that.
I'll also need to set aside a few static IPs for this project. These will need to be routable and within the same subnet as the DHCP range, but excluded from that DHCP range.
IP Address                    Purpose
192.168.1.60                  Control plane for Management cluster
192.168.1.61                  Control plane for Workload cluster
192.168.1.64 - 192.168.1.80   IP range for Workload load balancer

Prerequisites

Moving on to the Getting Started↗, I'll need to grab some software before I can actually Get Started.
Kubernetes control plane image

I need to download a VMware OVA which can be used for deploying my Kubernetes nodes from the VMware Customer Connect portal here↗2. There are a few different options available. I'll get the Photon release with the highest Kubernetes version currently available, photon-3-kube-v1.21.2+vmware.1-tkg.2-12816990095845873721.ova.

Once the file is downloaded, I'll log into my vCenter and use the Deploy OVF Template action to deploy a new VM using the OVA. I won't bother booting the machine once deployed, but will rename it to k8s-node to make it easier to identify later on and then convert it to a template.

Docker

I've already got Docker installed on this machine, but if I didn't I would follow the instructions here↗ to get it installed and then follow these instructions↗ to enable management of Docker without root.
I also verify that my install is using cgroup version 1 as version 2 is not currently supported:
docker info | grep -i cgroup # [tl! .cmd]
Cgroup Driver: cgroupfs # [tl! .nocopy:1]
Cgroup Version: 1

kubectl binary

Next up, I'll install kubectl as described here↗ - though the latest version is currently 1.23 and that won't work with the 1.21 control plane node image I downloaded from VMware (kubectl needs to be within one minor version of the control plane). Instead I need to find the latest 1.22 release.

I can look at the releases page on GitHub↗ to see that the latest release for me is 1.22.5. With this newfound knowledge I can follow the Install kubectl binary with curl on Linux↗ instructions to grab that specific version:

curl -sLO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl # [tl! .cmd:1]
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl # [tl! .nocopy:2]
[sudo] password for john:
kubectl version --client # [tl! .cmd]
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", # [tl! .nocopy:3]
GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z",
GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

kind binary

It's not strictly a requirement, but having the kind executable available will be handy for troubleshooting during the bootstrap process in case anything goes sideways. It can be installed in basically the same way as kubectl:
curl -sLo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64 # [tl! .cmd:2]
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
kind version
kind v0.11.1 go1.16.5 linux/amd64 # [tl! .nocopy]

Tanzu CLI

The final bit of required software is the Tanzu CLI, which can be downloaded from the project on GitHub↗.
curl -H &#34;Accept: application/vnd.github.v3.raw&#34; \\ # [tl! .cmd] -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \\ bash -s v0.9.1 linux And then unpack it and run the installer:
tar xf tce-linux-amd64-v0.9.1.tar.gz # [tl! .cmd:2] cd tce-linux-amd64-v0.9.1 ./install.sh I can then verify the installation is working correctly:
tanzu version # [tl! .cmd]
version: v0.2.1 # [tl! .nocopy:2]
buildDate: 2021-09-29
sha: ceaa474

Cluster creation

Okay, now it's time for the good stuff - creating some shiny new Tanzu clusters! The Tanzu CLI really does make this very easy to accomplish.
Management cluster

I need to create a Management cluster first and I'd like to do that with the UI, so that's as simple as:
tanzu management-cluster create --ui # [tl! .cmd] I should then be able to access the UI by pointing a web browser at http://127.0.0.1:8080... but I'm running this on a VM without a GUI, so I'll need to back up and tell it to bind on 0.0.0.0:8080 so the web installer will be accessible across the network. I can also include --browser none so that the installer doesn't bother with trying to launch a browser locally.
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none # [tl! .cmd] # [tl! .nocopy:2] Validating the pre-requisites... Serving kickstart UI at http://[::]:8080 Now I can point my local browser to my VM and see the UI: And then I can click the button at the bottom left to save my eyes3 before selecting the option to deploy on vSphere. I'll plug in the FQDN of my vCenter and provide a username and password to use to connect to it, then hit the Connect button. That will prompt me to accept the vCenter's certificate thumbprint, and then I'll be able to select the virtual datacenter that I want to use. Finally, I'll paste in the SSH public key4 I'll use for interacting with the cluster.
I click Next and move on to the Management Cluster Settings. This is for a lab environment that's fairly memory-constrained, so I'll pick the single-node Development setup with a small instance type. I'll name the cluster tce-mgmt and stick with the default kube-vip control plane endpoint provider. I plug in the control plane endpoint IP that I'll use for connecting to the cluster and select the small instance type for the worker node type.
I don't have an NSX Advanced Load Balancer or any Metadata to configure so I'll skip past those steps and move on to configuring the Resources. Here I choose to place the Tanzu-related resources in a VM folder named Tanzu, to store their data on my single host's single datastore, and to deploy to the one-host physical-cluster cluster.
Now for the Kubernetes Networking Settings: This bit is actually pretty easy. For Network Name, I select the vSphere network where the 192.168.1.0/24 network I identified earlier lives, d-Home-Mgmt. I leave the service and pod CIDR ranges as default.
I disable the Identity Management option and then pick the k8s-node template I had imported to vSphere earlier. I skip the Tanzu Mission Control piece (since I'm still waiting on access to TMC Starter↗) and click the Review Configuration button at the bottom of the screen to review my selections. See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly Deploy button doesn't seem to work while connected to the web server remotely.
tanzu management-cluster create \\ # [tl! .cmd] --file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6 In fact, I'm going to copy that file into my working directory and give it a more descriptive name so that I can re-use it in the future.
cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml \\ # [tl! .cmd] ~/projects/tanzu-homelab/tce-mgmt.yaml Now I can run the install command:
tanzu management-cluster create --file ./tce-mgmt.yaml -v 6 # [tl! .cmd] After a moment or two of verifying prerequisites, I'm met with a polite offer to enable Tanzu Kubernetes Grid Service in vSphere:
vSphere 7.0 Environment Detected. You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client. Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0. Note: To skip the prompts and directly deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0, you can set the &#39;DEPLOY_TKG_ON_VSPHERE7&#39; configuration variable to &#39;true&#39; Do you want to configure vSphere with Tanzu? [y/N]: n Would you like to deploy a non-integrated Tanzu Kubernetes Grid management cluster on vSphere 7.0? [y/N]: y That's not what I'm after in this case, though, so I'll answer with a n and a y to confirm that I want the non-integrated TKG deployment.
And now I go get coffee as it'll take 10-15 minutes for the deployment to complete. Okay, I'm back - and so is my shell prompt! The deployment completed successfully:
Waiting for additional components to be up and running... Waiting for packages to be up and running... Context set for management cluster tce-mgmt as &#39;tce-mgmt-admin@tce-mgmt&#39;. Management cluster created! You can now create your first workload cluster by running the following: tanzu cluster create [name] -f [file] Some addons might be getting installed! Check their status by running the following: kubectl get apps -A I can run that last command to go ahead and verify that the addon installation has completed:
kubectl get apps -A # [tl! .cmd] NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE # [tl! .nocopy:5] tkg-system antrea Reconcile succeeded 26s 6m49s tkg-system metrics-server Reconcile succeeded 36s 6m49s tkg-system tanzu-addons-manager Reconcile succeeded 22s 8m54s tkg-system vsphere-cpi Reconcile succeeded 19s 6m50s tkg-system vsphere-csi Reconcile succeeded 36s 6m50s And I can use the Tanzu CLI to get some other details about the new management cluster:
tanzu management-cluster get tce-mgmt # [tl! .cmd] NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start] tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management Details: NAME READY SEVERITY REASON SINCE MESSAGE /tce-mgmt True 40m ├─ClusterInfrastructure - VSphereCluster/tce-mgmt True 41m ├─ControlPlane - KubeadmControlPlane/tce-mgmt-control-plane True 40m │ └─Machine/tce-mgmt-control-plane-xtdnx True 40m └─Workers └─MachineDeployment/tce-mgmt-md-0 └─Machine/tce-mgmt-md-0-745b858d44-4c9vv True 40m Providers: NAMESPACE NAME TYPE PROVIDERNAME VERSION WATCHNAMESPACE capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23 capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23 capi-system cluster-api CoreProvider cluster-api v0.3.23 capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10 # [tl! .nocopy:end] Excellent! Things are looking good so I can move on to create the cluster which will actually run my workloads.
Workload cluster

I won't use the UI for this but will instead take a copy of my tce-mgmt.yaml file and adapt it to suit the workload needs (as described here↗).
cp tce-mgmt.yaml tce-work.yaml # [tl! .cmd:1] vi tce-work.yaml I only need to change 2 of the parameters in this file:
CLUSTER_NAME: from tce-mgmt to tce-work
VSPHERE_CONTROL_PLANE_ENDPOINT: from 192.168.1.60 to 192.168.1.61

I could change a few others if I wanted to5:

(Optional) CLUSTER_PLAN to change between dev/prod plans independently
(Optional) CONTROL_PLANE_MACHINE_COUNT to deploy an increased number of control plane nodes (must be an odd integer)
(Optional) WORKER_MACHINE_COUNT to add worker nodes
(Optional) NAMESPACE to deploy the cluster in a specific Kubernetes namespace
(Optional) OS_NAME and OS_VERSION to use a different machine image for the workload cluster
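Since the cluster config is just a flat file of KEY: value pairs, that two-value edit is easy to script if I end up stamping out more workload clusters later. A rough sketch - my own helper, not part of the TCE tooling or the original workflow:

```python
# Copy tce-mgmt.yaml to tce-work.yaml, swapping only the two values that differ.
# The cluster config is a flat KEY: value file, so simple line substitution is
# enough here; a YAML parser would work too but may reorder keys and drop comments.
overrides = {
    'CLUSTER_NAME': 'tce-work',
    'VSPHERE_CONTROL_PLANE_ENDPOINT': '192.168.1.61',
}

with open('tce-mgmt.yaml') as src, open('tce-work.yaml', 'w') as dst:
    for line in src:
        key = line.split(':', 1)[0].strip()
        if key in overrides:
            line = f'{key}: {overrides[key]}\n'
        dst.write(line)
```

After saving my changes to the tce-work.yaml file, I'm ready to deploy the cluster: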
tanzu cluster create --file tce-work.yaml # [tl! .cmd] Validating configuration... # [tl! .nocopy:start] Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually Creating workload cluster &#39;tce-work&#39;... Waiting for cluster to be initialized... Waiting for cluster nodes to be available... Waiting for addons installation... Waiting for packages to be up and running... Workload cluster &#39;tce-work&#39; created # [tl! .nocopy:end] Right on! I'll use tanzu cluster get to check out the workload cluster:
tanzu cluster get tce-work # [tl! .cmd] NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start] tce-work default running 1/1 1/1 v1.21.2+vmware.1 &lt;none&gt; ℹ Details: NAME READY SEVERITY REASON SINCE MESSAGE /tce-work True 9m31s ├─ClusterInfrastructure - VSphereCluster/tce-work True 10m ├─ControlPlane - KubeadmControlPlane/tce-work-control-plane True 9m31s │ └─Machine/tce-work-control-plane-8km9m True 9m31s └─Workers └─MachineDeployment/tce-work-md-0 └─Machine/tce-work-md-0-687444b744-cck4x True 8m31s # [tl! .nocopy:end] I can also go into vCenter and take a look at the VMs which constitute the two clusters: I've highlighted the two Control Plane nodes. They got their IP addresses assigned by DHCP, but VMware says↗ that I need to create reservations for them to make sure they don't change. So I'll do just that. Excellent, I've got a Tanzu management cluster and a Tanzu workload cluster. What now?
Working with Tanzu

If I run kubectl get nodes right now, I'll only get information about the management cluster:
kubectl get nodes # [tl! .cmd]
NAME                             STATUS   ROLES                  AGE   VERSION # [tl! .nocopy:2]
tce-mgmt-control-plane-xtdnx     Ready    control-plane,master   18h   v1.21.2+vmware.1
tce-mgmt-md-0-745b858d44-4c9vv   Ready    <none>                 17h   v1.21.2+vmware.1

Setting the right context

To be able to deploy stuff to the workload cluster, I need to tell kubectl how to talk to it. And to do that, I'll first need to use tanzu to capture the cluster's kubeconfig:
tanzu cluster kubeconfig get tce-work --admin # [tl! .cmd] Credentials of cluster &#39;tce-work&#39; have been saved # [tl! .nocopy:1] You can now access the cluster by running &#39;kubectl config use-context tce-work-admin@tce-work&#39; I can now run kubectl config get-contexts and see that I have access to contexts on both management and workload clusters:
kubectl config get-contexts # [tl! .cmd] CURRENT NAME CLUSTER AUTHINFO NAMESPACE # [tl! .nocopy:2] * tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin tce-work-admin@tce-work tce-work tce-work-admin And I can switch to the tce-work cluster like so:
kubectl config use-context tce-work-admin@tce-work # [tl! .cmd] Switched to context &#34;tce-work-admin@tce-work&#34;. # [tl! .nocopy] kubectl get nodes # [tl! .cmd] NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2] tce-work-control-plane-8km9m Ready control-plane,master 17h v1.21.2+vmware.1 tce-work-md-0-687444b744-cck4x Ready &lt;none&gt; 17h v1.21.2+vmware.1 There they are!
Deploying the yelb demo app

Before I move on to deploying actually useful workloads, I'll start with deploying a quick demo application as described in William Lam's post on Interesting Kubernetes application demos↗. yelb is a web app which consists of a UI front end, application server, database server, and Redis caching service, so it's a great little demo to make sure Kubernetes is working correctly.
I can check out the sample deployment that William put together here↗, and then deploy it with:
kubectl create ns yelb # [tl! .cmd] namespace/yelb created # [tl! .nocopy:1] kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml # [tl! .cmd] service/redis-server created # [tl! .nocopy:start] service/yelb-db created service/yelb-appserver created service/yelb-ui created deployment.apps/yelb-ui created deployment.apps/redis-server created deployment.apps/yelb-db created deployment.apps/yelb-appserver created # [tl! .nocopy:end] kubectl -n yelb get pods # [tl! .cmd] NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4] redis-server-74556bbcb7-r9jqc 1/1 Running 0 10s yelb-appserver-d584bb889-2jspg 1/1 Running 0 10s yelb-db-694586cd78-wb8tt 1/1 Running 0 10s yelb-ui-8f54fd88c-k2dw9 1/1 Running 0 10s Once the app is running, I can point my web browser at it to see it in action. But what IP do I use?
kubectl -n yelb get svc/yelb-ui # [tl! .cmd] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1] yelb-ui NodePort 100.71.228.116 &lt;none&gt; 80:30001/TCP 84s This demo is using a NodePort type service to expose the front end, which means it will be accessible on port 30001 on the node it's running on. I can find that IP by:
kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk &#39;{print $1}&#39;) | grep &#34;Node:&#34; # [tl! .cmd] Node: tce-work-md-0-687444b744-cck4x/192.168.1.145 # [tl! .nocopy] So I can point my browser at http://192.168.1.145:30001 and see the demo: After marveling at my own magnificence6 for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the yelb namespace to clean up the work I just did:
kubectl delete ns yelb # [tl! .cmd] namespace &#34;yelb&#34; deleted # [tl! .nocopy] Now let's move on and try to deploy yelb behind a LoadBalancer service so it will get its own IP. William has a deployment spec↗ for that too.
kubectl create ns yelb # [tl! .cmd] namespace/yelb created # [tl! .nocopy:1] kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml # [tl! .cmd] service/redis-server created # [tl! .nocopy:8] service/yelb-db created service/yelb-appserver created service/yelb-ui created deployment.apps/yelb-ui created deployment.apps/redis-server created deployment.apps/yelb-db created deployment.apps/yelb-appserver created kubectl -n yelb get pods # [tl! .cmd] NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4] redis-server-74556bbcb7-q6l62 1/1 Running 0 7s yelb-appserver-d584bb889-p5qgd 1/1 Running 0 7s yelb-db-694586cd78-hjtn4 1/1 Running 0 7s yelb-ui-8f54fd88c-pm9qw 1/1 Running 0 7s And I can take a look at that service...
kubectl -n yelb get svc/yelb-ui # [tl! .cmd] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1] yelb-ui LoadBalancer 100.67.177.185 &lt;pending&gt; 80:32339/TCP 15s Wait a minute. That external IP is still &lt;pending&gt;. What gives? Oh yeah I need to actually deploy and configure a load balancer before I can balance anything. That's up next.
Deploying kube-vip as a load balancer

Fortunately, William Lam wrote up some tips↗ for handling that too. It's based on work by Scott Rosenberg↗. The quick-and-dirty steps needed to make this work are:
git clone https://github.com/vrabbi/tkgm-customizations.git # [tl! .cmd:3] cd tkgm-customizations/carvel-packages/kube-vip-package kubectl apply -n tanzu-package-repo-global -f metadata.yml kubectl apply -n tanzu-package-repo-global -f package.yaml cat &lt;&lt; EOF &gt; values.yaml # [tl! .cmd] vip_range: 192.168.1.64-192.168.1.80 EOF tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml # [tl! .cmd] Now I can check out the yelb-ui service again:
kubectl -n yelb get svc/yelb-ui # [tl!.cmd] NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1] yelb-ui LoadBalancer 100.67.177.185 192.168.1.65 80:32339/TCP 4h35m And it's got an IP! I can point my browser to http://192.168.1.65 now and see: I'll keep the kube-vip load balancer since it'll come in handy, but I have no further use for yelb:
kubectl delete ns yelb # [tl! .cmd]
namespace "yelb" deleted # [tl! .nocopy]

Persistent Volume Claims, Storage Classes, and Storage Policies

At some point, I'm going to want to make sure that data from my Tanzu workloads stick around persistently - and for that, I'll need to define some storage stuff↗.
First up, I'll add a new tag called tkg-storage-local to the nuchost-local vSphere datastore that I want to use for storing Tanzu volumes: Then I create a new vSphere Storage Policy called tkg-storage-policy which states that data covered by the policy should be placed on the datastore(s) tagged with tkg-storage-local: So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage profile so I drop these details into a new file called vsphere-sc.yaml:
# torchlight! {"lineNumbers": true}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: tkg-storage-policy

And then apply it with:
kubectl apply -f vsphere-sc.yaml # [tl! .cmd] storageclass.storage.k8s.io/vsphere created # [tl! .nocopy] I can test the new vsphere Storage Class by creating a Persistent Volume Claim against it, defined in a new file called vsphere-pvc.yaml:
# torchlight! {"lineNumbers": true} apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: name: vsphere-demo-1 name: vsphere-demo-1 spec: accessModes: - ReadWriteOnce storageClassName: vsphere resources: requests: storage: 5Gi And applying it:
kubectl apply -f vsphere-pvc.yaml # [tl! .cmd] persistentvolumeclaim/vsphere-demo-1 created # [tl! .nocopy] I can see the new claim, and confirm that its status is Bound:
kubectl get pvc # [tl! .cmd] NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE # [tl! .nocopy:1] vsphere-demo-1 Bound pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b 5Gi RWO vsphere 4m25s And for bonus points, I can see that the container volume was created on the vSphere side: So that's storage sorted. I'll clean up my test volume before moving on:
kubectl delete -f vsphere-pvc.yaml # [tl! .cmd] persistentvolumeclaim "vsphere-demo-1" deleted # [tl! .nocopy] A real workload - phpIPAM Demos are all well and good, but how about a real-world deployment to tie it all together? I've been using a phpIPAM instance for assigning static IP addresses for my vRealize Automation deployments, but have only been using it to monitor IP usage within the network ranges to which vRA will provision machines. I recently decided that I'd like to expand phpIPAM's scope so it can keep an eye on all the network ranges within the environment. That's not a big ask in my little self-contained homelab, but having a single system scanning all the ranges of a large production network probably wouldn't scale too well.
Fortunately the phpIPAM project provides a remote scanning agent↗ which can be used for keeping an eye on networks and reporting back to the main phpIPAM server. With this, I could deploy an agent to each region (or multiple agents to a region!) and divide up the network into chunks that each agent would be responsible for scanning. But that's a pretty lightweight task for a single server to manage, and who wants to deal with configuring multiple instances of the same thing? Not this guy.
So I set to work exploring some containerization options, and I found phpipam-docker↗. That would easily replicate my existing setup in a trio of containers (one for the web front-end, one for the database back-end, and one with cron jobs to run scans at regular intervals)... but doesn't provide a remote scan capability. I also found a dockerized phpipam-agent↗, but this one didn't quite meet my needs. It did provide me a base to work off of though so a few days of tinkering↗ resulted in me publishing my first Docker image↗. I've still some work to do before this application stack is fully ready for production but it's at a point where I think it's worth doing a test deploy.
To start, I'll create a new namespace to keep things tidy:
kubectl create ns ipam # [tl! .cmd] namespace/ipam created # [tl! .nocopy] I'm going to wind up with four pods:
phpipam-db for the database back-end phpipam-www for the web front-end phpipam-cron for the local cron jobs, which will be largely but not completely7 replaced by: phpipam-agent for my remote scan agent I'll use each container's original docker-compose configuration and adapt that into something I can deploy on Kubernetes.
phpipam-db The phpIPAM database will live inside a MariaDB container. Here's the relevant bit from docker-compose:
# torchlight! {"lineNumbers": true} services: phpipam-db: image: mariadb:latest ports: - "3306:3306" environment: - MYSQL_ROOT_PASSWORD=VMware1!VMware1! volumes: - phpipam-db-data:/var/lib/mysql So it will need a Service exposing the container's port 3306 so that other pods can connect to the database. For my immediate demo, using type: ClusterIP will be sufficient since all the connections will be coming from within the cluster. When I do this for real, it will need to be type: LoadBalancer so that the agent running on a different cluster can connect. And it will need a PersistentVolumeClaim so it can store the database data at /var/lib/mysql. It will also get passed an environment variable to set the initial root password on the database instance (which will be used later during the phpIPAM install to create the initial phpipam database).
It might look like this on the Kubernetes side:
# torchlight! {&#34;lineNumbers&#34;: true} # phpipam-db.yaml apiVersion: v1 kind: Service metadata: name: phpipam-db labels: app: phpipam-db namespace: ipam spec: type: ClusterIP ports: - name: mysql port: 3306 protocol: TCP targetPort: 3306 selector: app: phpipam-db --- apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: name: phpipam-db name: phpipam-db-pvc namespace: ipam spec: accessModes: - ReadWriteOnce storageClassName: vsphere resources: requests: storage: 5Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: phpipam-db namespace: ipam spec: selector: matchLabels: app: phpipam-db replicas: 1 template: metadata: labels: app: phpipam-db spec: containers: - name: phpipam-db image: mariadb:latest env: - name: MYSQL_ROOT_PASSWORD value: &#34;VMware1!VMware1!&#34; ports: - name: mysql containerPort: 3306 volumeMounts: - name: phpipam-db-vol mountPath: /var/lib/mysql volumes: - name: phpipam-db-vol persistentVolumeClaim: claimName: phpipam-db-pvc Moving on:
phpipam-www This is the docker-compose excerpt for the web component:
# torchlight! {"lineNumbers": true} services: phpipam-web: image: phpipam/phpipam-www:1.5x ports: - "80:80" environment: - TZ=UTC - IPAM_DATABASE_HOST=phpipam-db - IPAM_DATABASE_PASS=VMware1! - IPAM_DATABASE_WEBHOST=% volumes: - phpipam-logo:/phpipam/css/images/logo Based on that, I can see that my phpipam-www pod will need a container running the phpipam/phpipam-www:1.5x image, a Service of type LoadBalancer to expose the web interface on port 80, a PersistentVolumeClaim mounted to /phpipam/css/images/logo, and some environment variables passed in to configure the thing. Note that the IPAM_DATABASE_PASS variable defines the password used for the phpipam user on the database (not the root user referenced earlier), and the IPAM_DATABASE_WEBHOST=% variable will define which hosts that phpipam database user will be able to connect from; setting it to % will make sure that my remote agent can connect to the database even if I don't know where the agent will be running.
Here's how I'd adapt that into a structure that Kubernetes will understand:
# torchlight! {&#34;lineNumbers&#34;: true} # phpipam-www.yaml apiVersion: v1 kind: Service metadata: name: phpipam-www labels: app: phpipam-www namespace: ipam spec: type: LoadBalancer ports: - name: http port: 80 protocol: TCP targetPort: 80 selector: app: phpipam-www --- apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: name: phpipam-www name: phpipam-www-pvc namespace: ipam spec: accessModes: - ReadWriteOnce storageClassName: vsphere resources: requests: storage: 100Mi --- apiVersion: apps/v1 kind: Deployment metadata: name: phpipam-www namespace: ipam spec: selector: matchLabels: app: phpipam-www replicas: 1 template: metadata: labels: app: phpipam-www spec: containers: # [tl! focus:2] - name: phpipam-www image: phpipam/phpipam-www:1.5x env: - name: TZ value: &#34;UTC&#34; - name: IPAM_DATABASE_HOST value: &#34;phpipam-db&#34; - name: IPAM_DATABASE_PASS value: &#34;VMware1!&#34; - name: IPAM_DATABASE_WEBHOST value: &#34;%&#34; ports: - containerPort: 80 volumeMounts: - name: phpipam-www-vol mountPath: /phpipam/css/images/logo volumes: - name: phpipam-www-vol persistentVolumeClaim: claimName: phpipam-www-pvc phpipam-cron This container has a pretty simple configuration in docker-compose:
# torchlight! {"lineNumbers": true} services: phpipam-cron: image: phpipam/phpipam-cron:1.5x environment: - TZ=UTC - IPAM_DATABASE_HOST=phpipam-db - IPAM_DATABASE_PASS=VMware1! - SCAN_INTERVAL=1h No exposed ports, no need for persistence - just a base image and a few variables to tell it how to connect to the database and how often to run the scans:
# torchlight! {"lineNumbers": true} # phpipam-cron.yaml apiVersion: apps/v1 kind: Deployment metadata: name: phpipam-cron namespace: ipam spec: selector: matchLabels: app: phpipam-cron replicas: 1 template: metadata: labels: app: phpipam-cron spec: containers: - name: phpipam-cron image: phpipam/phpipam-cron:1.5x env: - name: IPAM_DATABASE_HOST value: "phpipam-db" - name: IPAM_DATABASE_PASS value: "VMware1!" - name: SCAN_INTERVAL value: "1h" - name: TZ value: "UTC" phpipam-agent And finally, my remote scan agent. Here's the docker-compose:
# torchlight! {"lineNumbers": true} services: phpipam-agent: container_name: phpipam-agent restart: unless-stopped image: ghcr.io/jbowdre/phpipam-agent:latest environment: - IPAM_DATABASE_HOST=phpipam-db - IPAM_DATABASE_NAME=phpipam - IPAM_DATABASE_USER=phpipam - IPAM_DATABASE_PASS=VMware1! - IPAM_DATABASE_PORT=3306 - IPAM_AGENT_KEY= - IPAM_SCAN_INTERVAL=5m - IPAM_RESET_AUTODISCOVER=true - IPAM_REMOVE_DHCP=true - TZ=UTC It's got a few additional variables to make it extra-configurable, but still no need for persistence or network exposure. That IPAM_AGENT_KEY variable will need to get populated with the appropriate key generated within the new phpIPAM deployment, but we can deal with that later.
For now, here's how I'd tell Kubernetes about it:
# torchlight! {&#34;lineNumbers&#34;: true} # phpipam-agent.yaml apiVersion: apps/v1 kind: Deployment metadata: name: phpipam-agent namespace: ipam spec: selector: matchLabels: app: phpipam-agent replicas: 1 template: metadata: labels: app: phpipam-agent spec: containers: - name: phpipam-agent image: ghcr.io/jbowdre/phpipam-agent:latest env: - name: IPAM_DATABASE_HOST value: &#34;phpipam-db&#34; - name: IPAM_DATABASE_NAME value: &#34;phpipam&#34; - name: IPAM_DATABASE_USER value: &#34;phpipam&#34; - name: IPAM_DATABASE_PASS value: &#34;VMware1!&#34; - name: IPAM_DATABASE_PORT value: &#34;3306&#34; - name: IPAM_AGENT_KEY value: &#34;&#34; - name: IPAM_SCAN_INTERVAL value: &#34;5m&#34; - name: IPAM_RESET_AUTODISCOVER value: &#34;true&#34; - name: IPAM_REMOVE_DHCP value: &#34;true&#34; - name: TZ value: &#34;UTC&#34; Deployment and configuration of phpIPAM I can now go ahead and start deploying these containers, starting with the database one (upon which all the others rely):
kubectl apply -f phpipam-db.yaml # [tl! .cmd] service/phpipam-db created # [tl! .nocopy:2] persistentvolumeclaim/phpipam-db-pvc created deployment.apps/phpipam-db created And the web server:
kubectl apply -f phpipam-www.yaml # [tl! .cmd] service/phpipam-www created # [tl! .nocopy:2] persistentvolumeclaim/phpipam-www-pvc created deployment.apps/phpipam-www created And the cron runner:
kubectl apply -f phpipam-cron.yaml # [tl! .cmd] deployment.apps/phpipam-cron created # [tl! .nocopy] I'll hold off on the agent container for now since I'll need to adjust the configuration slightly after getting phpIPAM set up, but I will go ahead and check out my work so far:
kubectl -n ipam get all # [tl! .cmd] NAME READY STATUS RESTARTS AGE # [tl! .nocopy:start] pod/phpipam-cron-6c994897c4-6rsnp 1/1 Running 0 4m30s pod/phpipam-db-5f4c47d4b9-sb5bd 1/1 Running 0 16m pod/phpipam-www-769c95c68d-94klg 1/1 Running 0 5m59s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/phpipam-db ClusterIP 100.66.194.69 &lt;none&gt; 3306/TCP 16m service/phpipam-www LoadBalancer 100.65.232.238 192.168.1.64 80:31400/TCP 5m59s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/phpipam-cron 1/1 1 1 4m30s deployment.apps/phpipam-db 1/1 1 1 16m deployment.apps/phpipam-www 1/1 1 1 5m59s NAME DESIRED CURRENT READY AGE replicaset.apps/phpipam-cron-6c994897c4 1 1 1 4m30s replicaset.apps/phpipam-db-5f4c47d4b9 1 1 1 16m replicaset.apps/phpipam-www-769c95c68d 1 1 1 5m59s # [tl! .nocopy:end] And I can point my browser to the EXTERNAL-IP associated with the phpipam-www service to see the initial setup page: I'll click the New phpipam installation option to proceed to the next step: I'm all for easy so I'll opt for Automatic database installation, which will prompt me for the credentials of an account with rights to create a new database within the MariaDB instance. I'll enter root and the password I used for the MYSQL_ROOT_PASSWORD variable above: I click the Install database button and I'm then met with a happy success message saying that the phpipam database was successfully created.
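The web installer handles the database creation for me, but if I wanted to double-check it from the CLI I could poke at the MariaDB instance directly. This is just a quick sanity check I'm sketching out here (it assumes the mysql client binary is present inside the mariadb:latest image, which it normally is):
kubectl -n ipam exec -it deploy/phpipam-db -- mysql -u root -p'VMware1!VMware1!' -e 'SHOW DATABASES;' # [tl! .cmd]
Seeing phpipam in the returned list would confirm the installer did its thing.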
And that eventually gets me to the post-install screen, where I set an admin password and proceed to log in: To create a new scan agent, I go to Menu > Administration > Server management > Scan agents. And click the button to create a new one: I'll copy the agent code and plug it into my phpipam-agent.yaml file:
- name: IPAM_AGENT_KEY value: "4DC5GLo-F_35cy7BEPnGn7HivtjP_o-v" And then deploy that:
kubectl apply -f phpipam-agent.yaml # [tl! .cmd] deployment.apps/phpipam-agent created # [tl! .nocopy] The scan agent isn't going to do anything until it's assigned to a subnet though, so now I head to Administration > IP related management > Sections. phpIPAM comes with a few default sections and ranges and such defined so I'll delete those and create a new one that I'll call Lab. Now I can create a new subnet within the Lab section by clicking the Subnets menu, selecting the Lab section, and clicking + Add subnet. I'll define the new subnet as 192.168.1.0/24. Once I enable the option to Check hosts status, I'll then be able to specify my new remote-agent as the scanner for this subnet. It shows the scanner associated with the subnet, but no data yet. I'll need to wait a few minutes for the first scan to kick off (at the five-minute interval I defined in the configuration).
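While I'm waiting on that first scan, following the agent's logs is a low-effort way to confirm that it started up cleanly and connected to the database with the key I just plugged in - just a quick check I might run, not something from my original notes:
kubectl -n ipam logs -f deploy/phpipam-agent # [tl! .cmd]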
Woah, it actually works!
Conclusion I still need to do more work to get the containerized phpIPAM stack ready for production, but I'm feeling pretty good for having deployed a functional demo of it at this point! And working on this was a nice excuse to get a bit more familiar with Tanzu Community Edition specifically, Kubernetes in general, and Docker (I learned a ton while assembling the phpipam-agent image!). I find I always learn more about a new-to-me technology when I have an actual project to do rather than just going through the motions of a lab exercise. Maybe my notes will be useful to you, too.
Yo dawg, I heard you like containers... ↩︎
Register here↗ if you don't yet have an account. ↩︎
Enabling dark mode is probably the most important part of this process. ↩︎
If I didn't already have a key pair to use I would generate one with ssh-keygen -t rsa -b 4096 -C "[email protected]" and add it to my client with ssh-add ~/.ssh/id_rsa. ↩︎
I'm not going to, but I totally could. ↩︎
Mr. Anderson. ↩︎
My phpipam-agent image won't (yet?) do the DNS lookups that phpipam-cron can. ↩︎
Secure Networking Made Simple with Tailscale
Not all that long ago, I shared about a somewhat-complicated WireGuard VPN setup that I had started using to replace my previous OpenVPN solution. I raved about WireGuard's speed, security, and flexible (if complex) Cryptokey Routing, but adding and managing peers with WireGuard is a fairly manual (and tedious) process. And while I thought I was pretty clever for using a WireGuard peer in GCP to maintain a secure tunnel into my home network without having to punch holes through my firewall, routing all my traffic through The Cloud wasn't really optimal1.
And then I discovered Tailscale↗, which is built on the WireGuard protocol with an additional control plane on top. It delivers the same high security and high speed but dramatically simplifies the configuration and management of peer devices, and also adds in some other handy features like easy-to-configure Access Control Lists (ACLs)↗ to restrict traffic between peers and a MagicDNS↗ feature to automatically register DNS records for connected devices so you don't have to keep up with their IPs.
There's already a great write-up (from the source!) on How Tailscale Works↗, and it's really worth a read so I won't rehash it fully here. The tl;dr though is that Tailscale makes securely connecting remote systems incredibly easy, and it lets those systems connect with each other directly ("mesh") rather than needing traffic to go through a single VPN endpoint ("hub-and-spoke"). It uses a centralized coordination server to coordinate the complicated key exchanges needed for all members of a Tailscale network (a "tailnet↗") to trust each other, and this removes the need for a human to manually edit configuration files on every existing device just to add a new one to the mix. Tailscale also leverages magic :tada:↗ to allow Tailscale nodes to communicate with each other without having to punch holes in firewall configurations or forward ports or anything else tedious and messy. (And in case the typical NAT traversal techniques don't work out, Tailscale created the Detoured Encrypted Routing Protocol (DERP2) to make sure Tailscale can still function seamlessly even on extremely restrictive networks that block UDP entirely or otherwise interfere with NAT traversal.)
Not a VPN Service
It's a no-brainer solution for remote access, but it's important to note that Tailscale is not a VPN service; it won't allow you to internet anonymously or make it appear like you're connecting from a different country (unless you configure a Tailscale Exit Node hosted somewhere in The Cloud to do just that).
Tailscale's software is open-sourced↗ so you could host your own Tailscale control plane and web front end, but much of the appeal of Tailscale is how easy it is to set up and use. To that end, I'm using the Tailscale-hosted option. Tailscale offers a very generous free Personal tier which supports a single admin user, 20 connected devices, 1 subnet router, plus all of the bells and whistles, and the company also sells Team, Business, and Enterprise plans↗ if you need more users, devices, subnet routers, or additional capabilities3.
Tailscale provides surprisingly-awesome documentation, and the Getting Started with Tailscale↗ article does a great job of showing how easy it is to get up and running with just three steps:
Sign up for an account Add a machine to your network Add another machine to your network (repeat until satisfied) This post will start there but then also expand some of the additional features and capabilities that have me so excited about Tailscale.
Getting started The first step in getting up and running with Tailscale is to sign up at https://login.tailscale.com/start↗. You'll need to use an existing Google, Microsoft, or GitHub account to sign up, which also lets you leverage the 2FA and other security protections already enabled on those accounts.
Once you have a Tailscale account, you're ready to install the Tailscale client. The download page↗ outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd] After the install completes, it will tell you exactly what you need to do next:
Installation complete! Log in to start using Tailscale by running: sudo tailscale up There are also Tailscale apps available for iOS↗ and Android↗ - and the Android app works brilliantly on Chromebooks too!
Basic tailscale up Running sudo tailscale up then reveals the next step:
sudo tailscale up # [tl! .cmd] # [tl! .nocopy:3] To authenticate, visit: https://login.tailscale.com/a/1872939939df I can copy that address into a browser and I'll get prompted to log in to my Tailscale account. And that's it. Tailscale is installed, configured to run as a service, and connected to my Tailscale account. This also creates my tailnet.
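If I'd rather confirm the connection from the terminal than from the admin console, the client can report the address it was handed - just a quick optional check:
tailscale ip -4 # [tl! .cmd]
That should print the node's 100.x.y.z Tailscale address once the login completes.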
That was pretty easy, right? But what about if I can't easily get to a web browser from the terminal session on a certain device? No worries, tailscale up has a flag for that:
sudo tailscale up --qr # [tl! .cmd] That will convert the URL to a QR code that I can scan from my phone.
Advanced tailscale up There are a few additional flags that can be useful under certain situations:
--advertise-exit-node to tell the tailnet that this could be used as an exit node for internet traffic sudo tailscale up --advertise-exit-node # [tl! .cmd] --advertise-routes to let the node perform subnet routing functions to provide connectivity to specified local subnets sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd] --advertise-tags4 to associate the node with certain tags for ACL purposes (like tag:home to identify stuff in my home network and tag:cloud to label external cloud-hosted resources) sudo tailscale up --advertise-tags "tag:cloud" # [tl! .cmd] --hostname to manually specify a hostname to use within the tailnet sudo tailscale up --hostname "tailnode" # [tl! .cmd] --shields-up to block incoming traffic sudo tailscale up --shields-up # [tl! .cmd] These flags can also be combined with each other:
sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr # [tl! .cmd] Sidebar: Tailscale on VyOS Getting Tailscale on my VyOS virtual router was unfortunately a little more involved than leveraging the built-in WireGuard capability. I found the vyos-tailscale↗ project to help with building a customized VyOS installation ISO with the tailscaled daemon added in. I was then able to copy the ISO over to my VyOS instance and install it as if it were a standard upgrade↗. I could then bring up the interface, advertise my home networks, and make it available as an exit node with:
sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd] Other tailscale commands Once there are a few members, I can use the tailscale status command to see a quick overview of the tailnet:
tailscale status # [tl! .cmd] 100.115.115.39 deb01 john@ linux - # [tl! .nocopy:start] 100.118.115.69 ipam john@ linux - 100.116.90.109 johns-iphone john@ iOS - 100.116.31.85 matrix john@ linux - 100.114.140.112 pixel6pro john@ android - 100.94.127.1 pixelbook john@ android - 100.75.110.50 snikket john@ linux - 100.96.24.81 vyos john@ linux - 100.124.116.125 win01 john@ windows - # [tl! .nocopy:end] Without doing any other configuration beyond just installing Tailscale and connecting it to my account, I can now easily connect from any of these devices to any of the other devices using the listed Tailscale IP5. Entering ssh 100.116.31.85 will connect me to my Matrix server.
tailscale ping lets me check the latency between two Tailscale nodes at the Tailscale layer; the first couple of pings will likely be delivered through a nearby DERP server until the NAT traversal magic is able to kick in:
tailscale ping snikket # [tl! .cmd] pong from snikket (100.75.110.50) via DERP(nyc) in 34ms # [tl! .nocopy:3] pong from snikket (100.75.110.50) via DERP(nyc) in 35ms pong from snikket (100.75.110.50) via DERP(nyc) in 35ms pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms The tailscale netcheck command will give me some details about my local Tailscale node, like whether it's able to pass UDP traffic, which DERP server is the closest, and the latency to all Tailscale DERP servers:
tailscale netcheck # [tl! .cmd] # [tl! .nocopy:start] Report: * UDP: true * IPv4: yes, [LOCAL_PUBLIC_IP]:52661 * IPv6: no * MappingVariesByDestIP: false * HairPinning: false * PortMapping: * Nearest DERP: Chicago * DERP latency: - ord: 23.4ms (Chicago) - dfw: 26.8ms (Dallas) - nyc: 28.6ms (New York City) - sea: 71.5ms (Seattle) - sfo: 77.8ms (San Francisco) - lhr: 102.2ms (London) - fra: 114.8ms (Frankfurt) - sao: 133.1ms (São Paulo) - tok: 154.9ms (Tokyo) - syd: 215.3ms (Sydney) - sin: 243.7ms (Singapore) - blr: 244.6ms (Bangalore) # [tl! .nocopy:end] Tailscale management Now that the Tailscale client is installed on my devices and I've verified that they can talk to each other, it might be a good time to log in at login.tailscale.com↗ to take a look at the Tailscale admin console. Subnets and Exit Nodes See how the vyos node has little labels on it about "Subnets (!)" and "Exit Node (!)"? The exclamation marks are there because the node is advertising subnets and its exit node eligibility, but those haven't actually been turned on yet. To enable the vyos node to function as a subnet router (for the 172.16.0.0/16 and 192.168.1.0/24 networks listed beneath its Tailscale IP) and as an exit node (for internet-bound traffic from other Tailscale nodes), I need to click on the little three-dot menu icon at the right edge of the row and select the "Edit route settings..." option. Now I can approve the subnet routes (individually or all at once) and allow the node to route traffic to the internet as well6. Cool! But now that's giving me another warning...
Key expiry By default, Tailscale expires each node's encryption keys every 180 days↗. This improves security (particularly over vanilla WireGuard, which doesn't require any key rotation) but each node will need to reauthenticate (via tailscale up) in order to get a new key. It may not make sense to do that for systems acting as subnet routers or exit nodes since they would stop passing all Tailscale traffic once the key expires. That would also hurt for my cloud servers which are only accessible via Tailscale; if I can't log in through SSH (since it's blocked at the firewall) then I can't reauthenticate Tailscale to regain access. For those systems, I can click that three-dot menu again and select the &quot;Disable key expiry&quot; option. I tend to do this for my &quot;always on&quot; tailnet members and just enforce the key expiry for my &quot;client&quot; type devices which could potentially be physically lost or stolen.
Configuring DNS It's great that all my Tailscale machines can talk to each other directly by their respective Tailscale IP addresses, but who wants to keep up with IPs? I sure don't. Let's do some DNS. I'll start out by clicking on the DNS↗ tab in the admin console. I need to add a Global Nameserver before I can enable MagicDNS so I'll click on the appropriate button to enter in the Tailscale IP7 of my home DNS server (which is using NextDNS↗ as the upstream resolver). I'll also enable the toggle to "Override local DNS" to make sure all queries from connected clients are going through this server (and thus extend the NextDNS protection to all clients without having to configure them individually). I can also define search domains to be used for unqualified DNS queries by adding another name server with the same IP address, enabling the "Restrict to search domain" option, and entering the desired domain: This will let me resolve hostnames when connected remotely to my lab without having to type the domain suffix (ex, vcsa versus vcsa.lab.bowdre.net).
And, finally, I can click the "Enable MagicDNS" button to turn on the magic. This adds a new nameserver with a private Tailscale IP which will resolve Tailscale hostnames to their internal IP addresses.
Now I can log in to my Matrix server by simply typing ssh matrix. Woohoo!
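If name resolution ever seems off, I can query the MagicDNS resolver directly to make sure the record is being served; this is just a troubleshooting sketch (it assumes nslookup is installed and that the client is accepting the tailnet's DNS settings):
nslookup matrix 100.100.100.100 # [tl! .cmd]
100.100.100.100 is the address Tailscale uses for its MagicDNS resolver, so a successful answer here means the name is coming from the tailnet rather than my regular DNS.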
Access control Right now, all of my Tailscale nodes can access all of my other Tailscale nodes. That's certainly very convenient, but I'd like to break things up a bit. I can use access control policies to define which devices should be able to talk to which other devices, and I can use ACL tags↗ to logically group resources together to make this easier.
Tags I'm going to use three tags in my tailnet:
tag:home to identify servers in my home network which will have access to all other servers. tag:cloud to identify my cloud servers which will only have access to other cloud servers. tag:client to identify client-type devices which will be able to access all nodes in the tailnet. Before I can actually apply these tags to any of my machines, I first need to define tagOwners for each tag which will determine which users (in my organization of one) will be able to use the tags. This is done by editing the policy file available on the Access Controls↗ tab of the admin console.
This ACL file uses a format called HuJSON↗, which is basically JSON but with support for inline comments and with a bit of leniency when it comes to trailing commas. That makes a config file that is easy for both humans and computers to parse.
I'm going to start by creating a group called admins and add myself to that group. This isn't strictly necessary since I am the only user in the organization, but I feel like it's a nice practice anyway. Then I'll add the tagOwners section to map each tag to its owner, the new group I just created:
// torchlight! {"lineNumbers": true} { "groups": { "group:admins": ["[email protected]"], }, "tagOwners": { "tag:home": ["group:admins"], "tag:cloud": ["group:admins"], "tag:client": ["group:admins"] } } Now I have two options for applying tags to devices. I can either do it from the admin console, or by passing the --advertise-tags flag to the tailscale up CLI command. I touched on the CLI approach earlier so I'll go with the GUI approach this time. It's simple - I just go back to the Machines↗ tab, click on the three-dot menu button for a machine, and select the "Edit ACL tags..." option. I can then pick the tag (or tags!) I want to apply: The applied tags have now replaced the owner information which was previously associated with each machine: ACLs By default, Tailscale implements an implicit "Allow All" ACL. As soon as you start modifying the ACL, though, that switches to an implicit "Deny All". So I'll add new rules to explicitly state what communication should be permitted and everything else will be blocked.
Each ACL rule consists of four named parts:
action - what to do with the traffic, the only valid option is accept. users - a list of traffic sources, can be specific users, Tailscale IPs, hostnames, subnets, groups, or tags. proto - (optional) protocol for the traffic which should be permitted. ports - a list of destinations (and optional ports). So I'll add this to the top of my policy file:
// torchlight! {"lineNumbers": true} { "acls": [ { // home servers can access other servers "action": "accept", "users": ["tag:home"], "ports": [ "tag:home:*", "tag:cloud:*" ] }, { // clients can access everything "action": "accept", "users": ["tag:client"], "ports": ["*:*"] } ] } This policy becomes active as soon as I click the Save button at the bottom of the page, and I notice a problem very shortly. Do you see it?
Earlier I configured Tailscale to force all nodes to use my home DNS server for resolving all queries, and I just set an ACL which prevents my cloud servers from talking to my home servers... which includes the DNS server. I can think of two ways to address this:
Re-register the servers by passing the --accept-dns=false flag to tailscale up so they'll ignore the DNS configured in the admin console. Add a new ACL rule to allow DNS traffic to reach the DNS server from the cloud. Option 2 sounds better to me so that's what I'm going to do. Instead of putting an IP address directly into the ACL rule I'd rather use a hostname, and unfortunately the Tailscale host names aren't available within ACL rule declarations. But I can define a host alias in the policy to map a friendly name to the IP:
// torchlight! {"lineNumbers": true} { "hosts": { "win01": "100.124.116.125" } } And I can then create a new rule for "users": ["tag:cloud"] to add an exception for win01:53:
// torchlight! {"lineNumbers": true} { "acls": [ { // cloud servers can only access other cloud servers plus my internal DNS server "action": "accept", "users": ["tag:cloud"], "ports": [ "win01:53" ] } ] } And that gets DNS working again for my cloud servers while still serving the results from my NextDNS configuration. Here's the complete policy configuration:
// torchlight! {"lineNumbers": true} { "acls": [ { // home servers can access other servers "action": "accept", "users": ["tag:home"], "ports": [ "tag:home:*", "tag:cloud:*" ] }, { // cloud servers can only access my internal DNS server "action": "accept", "users": ["tag:cloud"], "ports": [ "win01:53" ] }, { // clients can access everything "action": "accept", "users": ["tag:client"], "ports": ["*:*"] } ], "hosts": { "win01": "100.124.116.125" }, "groups": { "group:admins": ["[email protected]"], }, "tagOwners": { "tag:home": ["group:admins"], "tag:cloud": ["group:admins"], "tag:client": ["group:admins"] } } Wrapping up This post has really only scratched the surface on the cool capabilities provided by Tailscale. I didn't even get into its options for enabling HTTPS with valid certificates↗, custom DERP servers↗, sharing tailnet nodes with other users↗, or file transfer using Taildrop↗. The Next Steps↗ section of the official Getting Started doc has some other cool ideas for Tailscale-powered use cases, and there are a ton more listed in the Solutions↗ and Guides↗ categories as well.
It's amazing to me that a VPN solution can be this simple to set up and manage while still offering such incredible flexibility. Tailscale is proof that secure networking doesn't have to be hard.
Plus the GCP egress charges started to slowly stack up to a few ones of dollars each month. ↩︎
May I just say that I love this acronym? ↩︎
There's also a reasonably-priced Personal Pro option which comes with 100 devices, 2 routers, and custom auth periods for $48/year. I'm using that since it's less than I was going to spend on WireGuard egress through GCP and I want to support the project in a small way. ↩︎
Before being able to assign tags at the command line, you must first define tag owners who can manage the tag. On a personal account, you've only got one user to worry with but you still have to set this up first. I'll go over this in a bit but here's the documentation↗ if you want to skip ahead. ↩︎
I could also connect using the Tailscale hostname, if MagicDNS↗ is enabled - but I'm getting ahead of myself. ↩︎
Once subnets are allowed, they're made available to all members of the tailnet so that traffic destined for those networks can be routed accordingly. Clients will need to opt-in to using the Exit Node though; I typically only do that when I'm on a wireless network I don't control and want to make sure that no one can eavesdrop on my internet traffic, but I like to have that option available for when I need it. ↩︎
Using the Tailscale IP will allow queries to go straight to the DNS server without having to go through the VyOS router first. Configuring my clients to use a tailnet node for DNS queries also has the added benefit of sending that traffic through the encrypted tunnels instead of across the internet in the clear. I get secure DNS without having to configure secure DNS! ↩︎
Snikket Private XMPP Chat on Oracle Cloud Free Tier
Non-technical users deserve private communications, too.
I shared a few months back about the steps I took to deploy my own Matrix↗ homeserver instance, and I've happily been using the Element↗ client for secure end-to-end encrypted chats with a small group of my technically-inclined friends. Being able to have private conversations without having to trust a single larger provider (unlike Signal or WhatsApp) is pretty great. Of course, many Matrix users just create accounts directly on the matrix.org homeserver which kind of hurts the "decentralized" aspect of the network, and relatively few bother with hosting their own homeserver. There are also multiple options for homeserver software and dozens of Matrix clients each with their own set of features and limitations. As a result, an individual's experience with Matrix could vary wildly depending on what combination of software they are using. And it can be difficult to get non-technical users (like my family) on board with a new communication platform when iMessage works just fine1.
I recently came across the Snikket project↗, which aims↗ to make decentralized end-to-end encrypted personal messaging simple and accessible for everyone, with an emphasis on providing a consistent experience across the network. Snikket does this by maintaining a matched set of server and client2 software with feature and design parity, making it incredibly easy to deploy and manage the server, and simplifying user registration with invite links. In contrast to Matrix, Snikket does not operate an open server on which users can self-register but instead requires users to be invited to a hosted instance. The idea is that a server would be used by small groups of family and friends where every user knows (and trusts!) the server operator while also ensuring the complete decentralization of the network3.
How simple is the server install?
I spun up a quick @snikket_im XMPP server last night to check out the project - and I do mean QUICK. It took me longer to register a new domain than to deploy the server on GCP and create my first account through the client.
— John (@johndotbowdre) November 18, 2021
Seriously, their 4-step quick-start guide↗ is so good that I didn't feel the need to do a blog post about my experience. I've now been casually using Snikket for a bit over a month and remain very impressed both by the software and the project itself, and have even deployed a new Snikket instance for my family to use. My parents were actually able to join the chat without any issues, which is a testament to how easy it is from a user perspective too.
A few days ago I migrated my original Snikket instance from Google Cloud (GCP) to the same Oracle Cloud Infrastructure (OCI) virtual server that's hosting my Matrix homeserver so I thought I might share some notes first on the installation process. At the end, I'll share the tweaks which were needed to get Snikket to run happily alongside Matrix.
Infrastructure setup You can refer to my notes from last time for details on how I created the Ubuntu 20.04 VM and configured the firewall rules both at the cloud infrastructure level as well as within the host using iptables. Snikket does need a few additional firewall ports↗ beyond what was needed for my Matrix setup:
Port(s) Transport Purpose 80, 443 TCP Web interface and group file sharing 3478-3479 TCP/UDP Audio/Video data proxy negotiation and discovery (STUN/TURN↗) 5349-5350 TCP/UDP Audio/Video data proxy negotiation and discovery (STUN/TURN over TLS) 5000 TCP File transfer proxy 5222 TCP Connections from clients 5269 TCP Connections from other servers 60000-601004 UDP Audio/Video data proxy (TURN data) As a gentle reminder, Oracle's iptables configuration inserts a REJECT all rule at the bottom of each chain. I needed to make sure that each of my ALLOW rules get inserted above that point. So I used iptables -L INPUT --line-numbers to identify which line held the REJECT rule, and then used iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT to insert the new rules above that point.
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:start] sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 443 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p tcp -m multiport --dports 3478,3479 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5000 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5222 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5269 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 3478,3479 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 5349,5350 -j ACCEPT sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000:60100 -j ACCEPT # [tl! .cmd:end] Then to verify the rules are in the right order:
sudo iptables -L INPUT --line-numbers -n # [tl! .cmd] Chain INPUT (policy ACCEPT) # [tl! .nocopy:start] num target prot opt source destination 1 ts-input all -- 0.0.0.0/0 0.0.0.0/0 2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 3 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 4 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 5 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123 6 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 7 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:443 8 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 9 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 5349,5350 10 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 60000:60100 11 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 3478,3479 12 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5269 13 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5222 14 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000 15 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 3478,3479 16 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited # [tl! .nocopy:end] Before moving on, it's important to save them so the rules will persist across reboots!
sudo netfilter-persistent save # [tl! .cmd] run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1] run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save I also needed to create three DNS records with my domain registrar:
# Domain TTL Class Type Target chat.vpota.to 300 IN A 132.145.174.39 groups.vpota.to 300 IN CNAME chat.vpota.to share.vpota.to 300 IN CNAME chat.vpota.to Install docker and docker-compose Snikket is distributed as a set of docker containers which makes it super easy to get up and running on basically any Linux system. But, of course, you'll first need to install docker↗
# Update package index sudo apt update # [tl! .cmd] # Install prereqs sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd] # Add docker's GPG key curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg # [tl! .cmd] # Add the docker repo echo \\ # [tl! .cmd] "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \\ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null # Refresh the package index with the new repo added sudo apt update # [tl! .cmd] # Install docker sudo apt install docker-ce docker-ce-cli containerd.io # [tl! .cmd] And install docker-compose also to simplify the container management:
# Download the docker-compose binary sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose # [tl! .cmd] # Make it executable sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd] Now we're ready to...
Install Snikket This starts with just making a place for Snikket to live:
sudo mkdir /etc/snikket # [tl! .cmd:1] cd /etc/snikket And then grabbing the Snikket docker-compose file:
sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml # [tl! .cmd] And then creating a very minimal configuration file:
sudo vi snikket.conf # [tl! .cmd] A basic config only needs two parameters:
Parameter Description SNIKKET_DOMAIN The fully-qualified domain name that clients will connect to SNIKKET_ADMIN_EMAIL An admin contact for the server That's it.
In my case, I'm going to add two additional parameters to restrict the UDP TURN port range that I set in my firewalls above.
So here's my config:
# torchlight! {"lineNumbers": true} SNIKKET_DOMAIN=chat.vpota.to [email protected] # Limit UDP port range SNIKKET_TWEAK_TURNSERVER_MIN_PORT=60000 SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100 Start it up! With everything in place, I can start up the Snikket server:
sudo docker-compose up -d # [tl! .cmd] This will take a moment or two to pull down all the required container images, start them, and automatically generate the SSL certificates. Very soon, though, I can point my browser to https://chat.vpota.to and see a lovely login page - complete with an automagically-valid-and-trusted certificate: Of course, I don't yet have a way to log in, and like I mentioned earlier Snikket doesn't offer open user registration. Every user (even me, the admin!) has to be invited. Fortunately I can generate my first invite directly from the command line:
sudo docker exec snikket create-invite --admin --group default # [tl! .cmd] That command will return a customized invite link which I can copy and paste into my browser. If I've got a mobile device handy, I can go ahead and install the client there to get started; the app will even automatically generate a secure password5 so that I (and my users) don't have to worry about it. Otherwise, clicking the register an account manually link at the bottom of the screen lets me create a username and password directly.
With shiny new credentials in hand, I can log in at the web portal to manage my account or access the admin panel.
Invite more users Excellent, I've got a private chat server but no one to chat privately with6. Time to fix that, eh? I could use that docker exec snikket create-invite command line again to create another invite link, but I think I'll do that through the admin panel instead. Before I get into the invite process, I'm going to take a brief detour to discuss circles. For those of you who didn't make a comfortable (though short-lived) home on Google+7, Snikket uses the term circle to refer to social circles within a local community. Each server gets a circle created by default, and new users will be automatically added to that circle. Users within the same circle will appear in each other's contact list and will also be added into a group chat together.
It might make sense to use circles to group users based on how they know each other. There could be a circle for family, a circle for people who work(ed) together, a circle for the members of a club, and a circle for The Gang that gets together every couple of weeks for board games.
The Manage circles button lets you, well, manage circles: create them, add/remove users to them, delete them, whatever. I just created a new circle called "Because Internets".
That yellow link icon in the Actions column will generate a one-time use link that could be used to invite an individual user to create an account on the server and join the selected circle.
Instead, I'll go back to the admin area and click on the Manage invitations button, which will give me a bit more control over how the invite works. As you can see, this page includes a toggle to select the invitation type:
Individual invitations are single-use links so should be used for inviting one user at a time. Group invitation links can be used multiple times by multiple users. Whichever type is selected, I also need to select the time period for which the invitation link will be valid as well as which circle anyone who accepts the invitation will be added to. And then I can generate and share the new invite link as needed. Advanced configuration Okay, so that covers everything that's needed for a standard Snikket installation in OCI. The firewall configuration in particular would have been much simpler in GCP (where I could have used ufw) but I still think it's overall pretty straightforward. But what about my case where I wanted to put Snikket on the same server as my Matrix instance? Or the steps I needed to move Snikket from the GCP server onto the OCI server?
Let's start with adjusting the Caddy reverse proxy to accommodate Snikket since that will be needed before I can start Snikket on the new server.
Caddy reverse proxy Remember Caddy↗ from last time? It's a super-handy easy-to-configure web server, and it greatly simplified the reverse proxy configuration needed for my Matrix instance.
One of the really cool things about Caddy is that it automatically generates SSL certificates for any domain name you tell it to serve (as long as the domain ownership can be validated through the ACME↗ challenge process, of course). But Snikket already handles its own certificates so I'll need to make sure that Caddy doesn't get in the way of those challenges.
Fortunately, the Snikket reverse proxy documentation↗ was recently updated with a sample config for making this happen. Matrix and Snikket really only overlap on ports 80 and 443 so those are the only ports I'll need to handle, which lets me go for the "Basic" configuration instead of the "Advanced" one. I can just adapt the sample config from the documentation and add that to my existing /etc/caddy/Caddyfile alongside the config for Matrix:
# torchlight! {&#34;lineNumbers&#34;: true} http://chat.vpota.to, # [tl! focus:start] http://groups.chat.vpota.to, http://share.chat.vpota.to { reverse_proxy localhost:5080 } chat.vpota.to, groups.chat.vpota.to, share.chat.vpota.to { reverse_proxy https://localhost:5443 { transport http { tls_insecure_skip_verify } } } # [tl! focus:end] matrix.bowdre.net { reverse_proxy /_matrix/* http://localhost:8008 reverse_proxy /_synapse/client/* http://localhost:8008 } bowdre.net { route { respond /.well-known/matrix/server \`{&#34;m.server&#34;: &#34;matrix.bowdre.net:443&#34;}\` redir https://virtuallypotato.com } } So Caddy will be listening on port 80 for traffic to http://chat.vpota.to, http://groups.chat.vpota.to, and http://share.chat.vpota.to, and will proxy that HTTP traffic to the Snikket instance on port 5080. Snikket will automatically redirect HTTP traffic to HTTPS except in the case of the required ACME challenges so that the certs can get renewed. It will also listen on port 443 for traffic to the same hostnames and will pass that into Snikket on port 5443 without verifying certs between the backside of the proxy and the front side of Snikket. This is needed since there isn't an easy way to get Caddy to trust the certificates used internally by Snikket8.
And then any traffic to matrix.bowdre.net or bowdre.net still gets handled as described in that other post.
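One step that's easy to forget after editing the Caddyfile: Caddy has to actually reload the new config. Assuming Caddy is running as a systemd service (as it was in my setup), that's just:
sudo systemctl reload caddy # [tl! .cmd]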
Did you notice that Snikket will need to get reconfigured to listen on 5080 and 5443 now? We'll get to that in just a minute. First, let's get the data onto the new server.
Migrating a Snikket instance Since Snikket is completely containerized, moving between hosts is a simple matter of transferring the configuration and data.
The Snikket team has actually put together a couple of scripts to assist with backing up↗ and restoring↗ an instance. I just adapted the last line of each to do what I needed:
sudo docker run --rm --volumes-from=snikket \\ # [tl! .cmd] -v "/home/john/snikket-backup/":/backup debian:buster-slim \\ tar czf /backup/snikket-"$(date +%F-%H%M)".tar.gz /snikket That will drop a compressed backup of the snikket_data volume into the specified directory, /home/john/snikket-backup/. While I'm at it, I'll also go ahead and copy the docker-compose.yml and snikket.conf files from /etc/snikket/:
sudo cp -a /etc/snikket/* /home/john/snikket-backup/ # [tl! .cmd] ls -l /home/john/snikket-backup/ # [tl! .cmd] total 1728 # [tl! .nocopy:3] -rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml -rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz -rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf And I can then zip that up for easy transfer:
tar cvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/ # [tl! .cmd] This would be a great time to go ahead and stop this original Snikket instance. After all, nothing that happens after the backup was exported is going to carry over anyway.
sudo docker-compose down # [tl! .cmd] Update DNS
This is also a great time to update the A record for chat.vpota.to so that it points to the new server. It will need a little bit of time for the change to trickle out, and the updated record really needs to be in place before starting Snikket on the new server so that there aren't any certificate problems.
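A quick way to confirm the record has actually flipped over before proceeding (assuming dig is available wherever I'm checking from) is to query it directly and make sure the answer is the new server's public IP:
dig +short chat.vpota.to # [tl! .cmd]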
Now I just need to transfer the archive from one server to the other. I've got Tailscale↗9 running on my various cloud servers so that they can talk to each other through a secure WireGuard tunnel (remember WireGuard?) without having to open any firewall ports between them, and that means I can just use scp to transfer the file without any fuss. I can even leverage Tailscale's Magic DNS↗ feature to avoid worrying with any IPs, just the hostname registered in Tailscale (chat-oci):
scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/ # [tl! .cmd] Next, I SSH in to the new server and unzip the archive:
ssh snikket-oci-server # [tl! .cmd:3] tar xf snikket-backup.tar.gz cd snikket-backup ls -l total 1728 # [tl! .nocopy:3] -rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml -rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz -rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf Before I can restore the content of the snikket-data volume on the new server, I'll need to first go ahead and set up Snikket again. I've already got docker and docker-compose installed from when I installed Matrix so I'll skip to creating the Snikket directory and copying in the docker-compose.yml and snikket.conf files.
sudo mkdir /etc/snikket # [tl! .cmd:3] sudo cp docker-compose.yml /etc/snikket/ sudo cp snikket.conf /etc/snikket/ cd /etc/snikket Before I fire this up on the new host, I need to edit the snikket.conf to tell Snikket to use those different ports defined in the reverse proxy configuration using a couple of SNIKKET_TWEAK_* lines↗:
# torchlight! {"lineNumbers": true} SNIKKET_DOMAIN=chat.vpota.to [email protected] SNIKKET_TWEAK_HTTP_PORT=5080 SNIKKET_TWEAK_HTTPS_PORT=5443 SNIKKET_TWEAK_TURNSERVER_MIN_PORT=60000 SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100 Alright, let's start up the Snikket server:
sudo docker-compose up -d # [tl! .cmd] After a moment or two, I can point a browser to https://chat.vpota.to and see the login screen (with a valid SSL certificate!) but I won't actually be able to log in. As far as Snikket is concerned, this is a brand new setup.
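The login page loading only really proves the web-facing container is happy, so before restoring anything it's worth a quick glance to confirm that all of the containers came up on the new host; docker-compose can report that without any extra tooling:
sudo docker-compose ps # [tl! .cmd]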
Now I can borrow the last line from the restore.sh script↗ to bring in my data:
sudo docker run --rm --volumes-from=snikket \\ # [tl! .cmd] --mount type=bind,source="/home/john/snikket-backup/snikket-2021-12-19-1745.tar.gz",destination=/backup.tar.gz \\ debian:buster-slim \\ bash -c "rm -rf /snikket/*; tar xvf /backup.tar.gz -C /" If I refresh the login page I can now log back in with my account and verify that everything is just the way I left it back on that other server: And I can open the Snikket client on my phone and get back to chatting - this migration was a success!
John laughed at a message. ↩︎
Snikket currently has clients for Android↗ and iOS↗ with plans to add a web client in the future. ↩︎
That said, Snikket is built on the XMPP messaging standard↗ which means it can inter-operate with other XMPP-based chat networks, servers, and clients - though of course the best experience will be had between Snikket servers and clients. ↩︎
By default Snikket can use any UDP port in the range 49152-65535 for TURN call data but restricting it to 100 ports should be sufficient↗ for most small servers. ↩︎
It's also easy for the administrator to generate password reset links for users who need a new password. ↩︎
The ultimate in privacy? ↩︎
Too soon? Yep. Still too soon. ↩︎
Remember that both Caddy and Snikket are managing their own fully-valid certificates in this scenario, but they don't necessarily know that about each other. ↩︎
More on how I use Tailscale here! ↩︎
`,description:"Notes on installing a Snikket XMPP chat instance alongside a Matrix instance on an Oracle Cloud free tier server",url:"https://runtimeterror.dev/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/"},"https://runtimeterror.dev/script-to-convert-posts-to-hugo-page-bundles/":{title:"Script to Convert Posts to Hugo Page Bundles",tags:["hugo","meta","shell"],content:`In case you missed the news, I recently migrated this blog from a site built with Jekyll to one built with Hugo. One of Hugo's cool features is the concept of Page Bundles↗, which bundle a page's resources together in one place instead of scattering them all over the place.
Let me illustrate this real quick-like. Focusing only on the content-generating portions of a Hugo site directory might look something like this:
site
├── content
│   └── post
│       ├── first-post.md
│       ├── second-post.md
│       └── third-post.md
└── static
    └── images
        ├── logo.png
        └── post
            ├── first-post-image-1.png
            ├── first-post-image-2.png
            ├── first-post-image-3.png
            ├── second-post-image-1.png
            ├── second-post-image-2.png
            ├── third-post-image-1.png
            ├── third-post-image-2.png
            ├── third-post-image-3.png
            └── third-post-image-4.png
So the article contents go under site/content/post/ in a file called name-of-article.md. Each article may embed images (or other file types), and those get stored in site/static/images/post/ and referenced like ![Image for first post](/images/post/first-post-image-1.png). When Hugo builds a site, it processes the stuff under the site/content/ folder to render the Markdown files into browser-friendly HTML pages but it doesn't process anything in the site/static/ folder; that's treated as static content and just gets dropped as-is into the resulting site.
It's functional, but things can get pretty messy when you've got a bunch of image files and are struggling to keep track of which images go with which post.
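A quick grep makes that mess easy to see. This is only an illustrative sketch against the example layout above (the post path is hypothetical):
# list the statically-stored images referenced by a single post
grep -o -P '/images/post/[^)]+' site/content/post/first-post.md # [tl! .cmd]
/images/post/first-post-image-1.png # [tl! .nocopy:2]
/images/post/first-post-image-2.png
/images/post/first-post-image-3.png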
Like I mentioned earlier, Hugo's Page Bundles group a page's resources together in one place. Each post gets its own folder under site/content/ and then all of the other files it needs to reference can get dropped in there too. With Page Bundles, the folder tree looks like this:
site
├── content
│   └── post
│       ├── first-post
│       │   ├── first-post-image-1.png
│       │   ├── first-post-image-2.png
│       │   ├── first-post-image-3.png
│       │   └── index.md
│       ├── second-post
│       │   ├── index.md
│       │   ├── second-post-image-1.png
│       │   └── second-post-image-2.png
│       └── third-post
│           ├── index.md
│           ├── third-post-image-1.png
│           ├── third-post-image-2.png
│           ├── third-post-image-3.png
│           └── third-post-image-4.png
└── static
    └── images
        └── logo.png
Images and other files are now referenced in the post directly like ![Image for post 1](first-post-image-1.png), and this makes it a lot easier to keep track of which images go with which post. And since the files aren't considered to be static anymore, Page Bundles enables Hugo to perform certain Image Processing tasks↗ when the site gets built.
Anyway, I wanted to start using Page Bundles but didn't want to have to manually go through all my posts to move the images and update the paths so I spent a few minutes cobbling together a quick script to help me out. It's pretty similar to the one I created to help migrate images from Hashnode to my Jekyll site last time around - and, like that script, it's not pretty, polished, or flexible in the least, but it did the trick for me.
This one needs to be run from one step above the site root (../site/ in the example above), and it gets passed the relative path to a post (site/content/posts/first-post.md). From there, it will create a new folder with the same name (site/content/posts/first-post/) and move the post into there while renaming it to index.md (site/content/posts/first-post/index.md).
It then looks through the newly-relocated post to find all the image embeds. It moves the image files into the post directory, and then updates the post to point to the new image locations.
Next it updates the links for any thumbnail images mentioned in the front matter post metadata. In most of my past posts, I reused an image already embedded in the post as the thumbnail so those files would already be moved by the time the script gets to that point. For the few exceptions, it also needs to move those image files over as well.
Lastly, it changes the usePageBundles flag from false to true so that Hugo knows what we've done.
# torchlight! {"lineNumbers": true}
#!/bin/bash
# Hasty script to convert a given standard Hugo post (where the post content and
# images are stored separately) to a Page Bundle (where the content and images are
# stored together in the same directory).
#
# Run this from the directory directly above the site root, and provide the relative
# path to the existing post that needs to be converted.
#
# Usage: ./convert-to-pagebundle.sh vpotato/content/posts/hello-hugo.md

inputPost="$1"                          # vpotato/content/posts/hello-hugo.md
postPath=$(dirname $inputPost)          # vpotato/content/posts
postTitle=$(basename $inputPost .md)    # hello-hugo
newPath="$postPath/$postTitle"          # vpotato/content/posts/hello-hugo
newPost="$newPath/index.md"             # vpotato/content/posts/hello-hugo/index.md
siteBase=$(echo "$inputPost" | awk -F/ '{ print $1 }')  # vpotato

mkdir -p "$newPath"                     # make 'hello-hugo' dir
mv "$inputPost" "$newPost"              # move 'hello-hugo.md' to 'hello-hugo/index.md'

imageLinks=($(grep -o -P '(?<=!\[)(?:[^\]]+)\]\(([^\)]+)' $newPost | grep -o -P '/images.*'))
# Ex: '/images/posts/image-name.png'
imageFiles=($(for file in ${imageLinks[@]}; do basename $file; done))
# Ex: 'image-name.png'
imagePaths=($(for file in ${imageLinks[@]}; do echo "$siteBase/static$file"; done))
# Ex: 'vpotato/static/images/posts/image-name.png'

for index in ${!imagePaths[@]}; do
  mv ${imagePaths[index]} $newPath      # vpotato/static/images/posts/image-name.png --> vpotato/content/posts/hello-hugo/image-name.png
  sed -i "s^${imageLinks[index]}^${imageFiles[index]}^" $newPost
done

thumbnailLink=$(grep -P '^thumbnail:' $newPost | grep -o -P 'images.*')  # images/posts/thumbnail-name.png
if [[ $thumbnailLink ]]; then
  thumbnailFile=$(basename $thumbnailLink)  # thumbnail-name.png
  sed -i "s|thumbnail: $thumbnailLink|thumbnail: $thumbnailFile|" $newPost
  # relocate the thumbnail file if it hasn't already been moved
  if [[ ! -f "$newPath/$thumbnailFile" ]]; then
    mv "$siteBase/static/$thumbnailLink" "$newPath"
  fi
fi

# enable page bundles
sed -i "s|usePageBundles: false|usePageBundles: true|" $newPost
`,description:"A hacky script to convert traditional posts (with images stored separately) to a Hugo Page Bundle",url:"https://runtimeterror.dev/script-to-convert-posts-to-hugo-page-bundles/"},"https://runtimeterror.dev/hello-hugo/":{title:"Hello Hugo",tags:["meta","hugo"],content:`Oops, I did it again.
It wasn't all that long ago that I migrated this blog from Hashnode to a Jekyll site published via GitHub Pages. Well, a few weeks ago I learned a bit about another static site generator called Hugo↗, and I just had to give it a try. And I came away from my little experiment quite impressed!
While Jekyll is built on Ruby and requires you to install and manage a Ruby environment before being able to use it to generate a site, Hugo is built on Go and requires nothing more than the hugo binary. That makes it much easier for me to hop between devices. Getting started with Hugo is pretty damn simple↗, and Hugo provides some very cool built-in features↗ which Jekyll would need external plugins to provide. And there are of course plenty of lovely themes↗ to help your site look its best.
Hugo's real claim to fame, though, is its speed. Building a site with Hugo is much faster than with Jekyll, and that makes it quicker to test changes locally before pushing them out onto the internet.
Jekyll was a great way for me to get started on managing my own site with a SSG, but Hugo seems to me like a more modern approach. I decided to start working on migrating Virtually Potato over to Hugo. Hugo even made it easy to import my existing content with the hugo import jekyll command.
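The import itself is a one-liner; this is just a sketch with hypothetical paths, and the converted content generally still needs some manual cleanup afterward:
# read the existing Jekyll site and scaffold a new Hugo site from it
hugo import jekyll ./jekyll-site ./hugo-site # [tl! .cmd]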
After a few hours spent trying out different themes, I landed on the Hugo Clarity theme↗ which is based on VMware's Clarity Design↗. This theme offers a user-selectable light/dark theme, lots of great enhancements for displaying code snippets, and a responsive mobile layout, and I just thought that incorporating some of VMware's style into this site felt somehow appropriate. It did take quite a bit of tweaking to get everything integrated and working the way I wanted it to (and to update the existing content to fit), but I learned a ton in the process so I consider that time well spent.
Along the way I also wanted to try out Netlify↗ for building and serving the site online instead of the rather bare-bones GitHub Pages that I'd been using. Like GitHub Pages, you can configure Netlify to watch a repository (on GitHub, GitLab, or Bitbucket) and it will fire off a build whenever new stuff is committed. By default, that latest build will be automatically published to your site, but Netlify also provides much more control of this process. You can pause publishing, manually publish a certain deployment, quickly rollback in case of any issues, and also preview deployments before they get published to the live site.
Putting Netlify in front of the repositories where my site content is stored also enabled a pretty seamless transition once I was ready to actually flip the switch on the new-and-improved Virtually Potato. I had actually been using Netlify to serve the Jekyll version of this site for a week or two. When it was time to change, I disabled the auto-publish feature to pin that version of the site and then reconfigured which repository Netlify was watching. That kicked off a new (unpublished) deploy of the new Hugo site and I was able to preview it to confirm that everything looked just as it had in my local environment. Once I was satisfied I just clicked a button to start publishing the Hugo-based deploy, and the new site was live, instantly - no messing with DNS records or worrying about certificates, that was all taken care of by Netlify.
Anyway, here we are: the new Virtually Potato, powered by Hugo and Netlify!
`,description:"I migrated my blog from a Jekyll site hosted on Github Pages to a Hugo site stored in Gitlab and published via Netlify",url:"https://runtimeterror.dev/hello-hugo/"},"https://runtimeterror.dev/fixing-403-error-ssc-8-6-vra-idm/":{title:"Fixing 403 error on SaltStack Config 8.6 integrated with vRA and vIDM",tags:["vmware","vra","lcm","salt","openssl","certs"],content:`I've been wanting to learn a bit more about SaltStack Config↗ so I recently deployed SSC 8.6 to my environment (using vRealize Suite Lifecycle Manager to do so as described here↗). I selected the option to integrate with my pre-existing vRA and vIDM instances so that I wouldn't have to manage authentication directly since I recall that the LDAP authentication piece was a little clumsy the last time I tried it.
The Problem
Unfortunately I ran into a problem immediately after the deployment completed: instead of being redirected to the vIDM authentication screen, I get a 403 Forbidden error.
I used SSH to log in to the SSC appliance as root, and I found this in the /var/log/raas/raas log file:
2021-11-05 18:37:47,705 [var.lib.raas.unpack._MEIV8zDs3.raas.mods.vra.params ][ERROR :252 ][Webserver:6170] SSL Exception - https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery may be using a self-signed certificate HTTPSConnectionPool(host='vra.lab.bowdre.net', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zc2MubGFiLmJvd2RyZS5uZXQvaWRlbnRpdHkvYXBpL2NvcmUvYXV0aG4vY3Nw&redirect_uri=https%3A%2F%2Fssc.lab.bowdre.net%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-299XZv71So (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))
2021-11-05 18:37:47,928 [tornado.application ][ERROR :1792][Webserver:6170] Uncaught exception GET /csp/gateway/am/api/loggedin/user/profile (192.168.1.100)
HTTPServerRequest(protocol='https', host='ssc.lab.bowdre.net', method='GET', uri='/csp/gateway/am/api/loggedin/user/profile', version='HTTP/1.1', remote_ip='192.168.1.100')
Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 706, in urlopen
  File "urllib3/connectionpool.py", line 382, in _make_request
  File "urllib3/connectionpool.py", line 1010, in _validate_conn
  File "urllib3/connection.py", line 421, in connect
  File "urllib3/util/ssl_.py", line 429, in ssl_wrap_socket
  File "urllib3/util/ssl_.py", line 472, in _ssl_wrap_socket_impl
  File "ssl.py", line 423, in wrap_socket
  File "ssl.py", line 870, in _create
  File "ssl.py", line 1139, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
Further, attempting to pull down that URL with curl also failed:
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
curl: (60) SSL certificate problem: self signed certificate in certificate chain # [tl! .nocopy:5]
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
In my homelab, I am indeed using self-signed certificates. I also encountered the same issue in my lab at work, though, and I'm using certs issued by our enterprise CA there. I had run into a similar problem with previous versions of SSC, but the quick-and-dirty workaround to disable certificate verification↗ doesn't seem to work anymore.
The Solution
Clearly I needed to import either the vRA system's certificate (for my homelab) or the certificate chain for my enterprise CA (for my work environment) into SSC's certificate store so that it will trust vRA. But how?
I fumbled around for a bit and managed to get the required certs added to the system certificate store so that my curl test would succeed, but trying to access the SSC web UI still gave me a big middle finger. I eventually found this documentation↗ which describes how to configure SSC to work with self-signed certs, and it held the missing detail of how to tell the SaltStack Returner-as-a-Service (RaaS) component that it should use that system certificate store.
So here's what I did to get things working in my homelab:
1. Point a browser to my vRA instance, click on the certificate error to view the certificate details, and then export the CA certificate to a local file. (For a self-signed cert issued by LCM, this will likely be called something like Automatically generated one-off CA authority for vRA.)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used ~/vra.crt.
3. Append the certificate to the end of the system ca-bundle.crt:
cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt # [tl! .cmd]
4. Test that I can now curl from vRA without a certificate error:
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""} # [tl! .nocopy]
5. Edit /usr/lib/systemd/system/raas.service to update the service definition so it will look to the ca-bundle.crt file by adding Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt above the ExecStart line:
# torchlight! {"lineNumbers": true}
# /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
After=network.target

[Service]
Type=simple
User=raas
Group=raas
# to be able to bind port < 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt # [tl! focus]
ExecStart=/usr/bin/raas
TimeoutStopSec=90

[Install]
WantedBy=multi-user.target
Stop and restart the raas service:
systemctl daemon-reload # [tl! .cmd:2]
systemctl stop raas
systemctl start raas
And then try to visit the SSC URL again. This time, it redirects successfully to vIDM. Log in and get salty.
The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in .p7b format.
2. Use openssl to extract the individual certificates:
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem # [tl! .cmd]
Copy it to the SSC appliance, and then pick up with Step 3 above.
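As an aside, if clicking through browser dialogs to export certificates gets old, an openssl one-liner can dump whatever chain the server presents straight to a PEM file. This is only a sketch - depending on how the server is configured, the presented chain may or may not include the root CA that actually needs to be trusted:
openssl s_client -connect vra.lab.bowdre.net:443 -showcerts </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > vra-chain.pem # [tl! .cmd]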
I'm eager to dive deeper with SSC and figure out how best to leverage it with vRA. I'll let you know if/when I figure out any cool tricks!
In the meantime, maybe my struggles today can help you get past similar hurdles in your SSC deployments.
`,description:"",url:"https://runtimeterror.dev/fixing-403-error-ssc-8-6-vra-idm/"},"https://runtimeterror.dev/cloud-based-wireguard-vpn-remote-homelab-access/":{title:"Cloud-hosted WireGuard VPN for remote homelab access",tags:["linux","gcp","cloud","wireguard","vpn","homelab","tasker","automation","networking","security","selfhosting"],content:`For a while now, I've been using an OpenVPN Access Server↗ virtual appliance for remotely accessing my homelab. That's worked fine but it comes with a lot of overhead. It also requires maintaining an SSL certificate and forwarding three ports through my home router, in addition to managing a fairly complex software package and configurations. The free version of the OpenVPN server also only supports a maximum of two simultaneous connections. I recently ran into issues with the certbot automated SSL renewal process on my OpenVPN AS VM and decided that it might be time to look for a simpler solution.
I found that solution in WireGuard↗, which provides an extremely efficient secure tunnel implemented directly in the Linux kernel. It has a much smaller (and easier-to-audit) codebase, requires minimal configuration, and uses the latest crypto wizardry to securely connect multiple systems. It took me an hour or so of fumbling to get WireGuard deployed and configured on a fresh (and minimal) Ubuntu 20.04 VM running on my ESXi 7 homelab host, and I was pretty happy with the performance, stability, and resource usage of the new setup. That new VM idled at a full tenth of the memory usage of my OpenVPN AS, and it only required a single port to be forwarded into my home network.
Of course, I soon realized that the setup could be even better: I'm now running a WireGuard server on the Google Cloud free tier, and I've configured the VyOS virtual router I use for my homelab stuff to connect to that cloud-hosted server to create a secure tunnel between the two without needing to punch any holes in my local network (or consume any additional resources). I can then connect my client devices to the WireGuard server in the cloud. From there, traffic intended for my home network gets relayed to the VyOS router, and internet-bound traffic leaves Google Cloud directly. So my self-managed VPN isn't just good for accessing my home lab remotely, but also more generally for encrypting traffic when on WiFi networks I don't control - allowing me to replace the paid ProtonVPN subscription I had been using for that purpose.
It's a pretty slick setup, if I do say so myself. Anyway, this post will discuss how I implemented this, and what I learned along the way.
WireGuard Concepts, in Brief
WireGuard does things a bit differently from other VPN solutions I've used in the past. For starters, there aren't any user accounts to manage, and in fact users don't really come into the picture at all. WireGuard also doesn't really distinguish between client and server; the devices on both ends of a tunnel connection are peers, and they use the same software package and very similar configurations. Each WireGuard peer is configured with a virtual network interface with a private IP address used for the tunnel network, and a configuration file tells it which tunnel IP(s) will be used by the other peer(s). Each peer has its own cryptographic private key, and the other peers get a copy of the corresponding public key added to their configuration so that all the peers can recognize each other and encrypt/decrypt traffic appropriately. This mapping of peer addresses to public keys facilitates what WireGuard calls Cryptokey Routing↗.
Once the peers are configured, all it takes is bringing up the WireGuard virtual interface on each peer to establish the tunnel and start passing secure traffic.
You can read a lot more fascinating details about how this all works back on the WireGuard homepage↗ (and even more in this protocol description↗) but this at least covers the key points I needed to grok prior to a successful initial deployment.
For my hybrid cloud solution, I also leaned heavily upon this write-up of a WireGuard Site-to-Site configuration↗ for how to get traffic flowing between my on-site environment, cloud-hosted WireGuard server, and &quot;Road Warrior&quot; client devices, and drew from this documentation on implementing WireGuard in GCP↗ as well. The VyOS documentation for configuring the built-in WireGuard interface↗ was also quite helpful to me.
Okay, enough background; let's get this thing going.
Google Cloud Setup
Instance Deployment
I started by logging into my Google Cloud account at https://console.cloud.google.com↗, and proceeded to create a new project (named wireguard) to keep my WireGuard-related resources together. I then navigated to Compute Engine and created a new instance↗ inside that project. The basic setup is:
Name: wireguard
Region: us-east1
Machine Type: e2-micro
Boot Disk Size: 10 GB
Boot Disk Image: Ubuntu 20.04 LTS
The other defaults are fine, but I'll hold off on clicking the friendly blue "Create" button at the bottom and instead click to expand the Networking, Disks, Security, Management, Sole-Tenancy sections to tweak a few more things.
Network Configuration
Expanding the Networking section of the request form lets me add a new wireguard network tag, which will make it easier to target the instance with a firewall rule later. I also want to enable the IP Forwarding option so that the instance will be able to do router-like things.
By default, the new instance will get assigned a public IP address that I can use to access it externally - but this address is ephemeral so it will change periodically. Normally I'd overcome this by using ddclient to manage its dynamic DNS record, but (looking ahead) VyOS's WireGuard interface configuration↗ unfortunately only supports connecting to an IP rather than a hostname. That means I'll need to reserve a static IP address for my instance.
I can do that by clicking on the Default network interface to expand the configuration. While I'm here, I'll first change the Network Service Tier from Premium to Standard to save a bit of money on network egress fees. (This might be a good time to mention that while the compute instance itself is free, I will have to spend about $3/mo for the public IP↗, as well as $0.085/GiB for internet egress via the Standard tier↗ (versus $0.12/GiB on the Premium tier↗). So not entirely free, but still pretty damn cheap for a cloud-hosted VPN that I control completely.)
Anyway, after switching to the cheaper Standard tier I can click on the External IP dropdown and select the option to Create IP Address. I give it the same name as my instance to make it easy to keep up with.
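For what it's worth, the same reservation can be made from the gcloud CLI instead of clicking through the console. This is just a sketch of the equivalent commands, assuming the wireguard project is already selected:
# reserve a static external IP in the same region as the instance, then look it up
gcloud compute addresses create wireguard --region us-east1 # [tl! .cmd:1]
gcloud compute addresses describe wireguard --region us-east1 --format='get(address)'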
Security Configuration
The Security section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard # [tl! .cmd] Okay, now that I've got my keys, I can click the Add Item button and paste in the contents of ~/.ssh/id_ed25519_wireguard.pub.
And that's it for the pre-deploy configuration! Time to hit Create to kick it off.
The instance creation will take a couple of minutes but I can go ahead and get the firewall sorted while I wait.
Firewall
Google Cloud's default firewall configuration will let me reach my new server via SSH without needing to configure anything, but I'll need to add a new rule to allow the WireGuard traffic. I do this by going to VPC > Firewall and clicking the button at the top to Create Firewall Rule↗. I give it a name (allow-wireguard-ingress), select the rule target by specifying the wireguard network tag I had added to the instance, and set the source range to 0.0.0.0/0. I'm going to use the default WireGuard port, so I select the udp: checkbox and enter 51820.
I'll click Create and move on.
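(For anyone who prefers the CLI, a rough gcloud equivalent of that same rule looks something like this:)
gcloud compute firewall-rules create allow-wireguard-ingress --allow udp:51820 --source-ranges 0.0.0.0/0 --target-tags wireguard # [tl! .cmd]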
WireGuard Server Setup
Once the Compute Engine > Instances page↗ indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH:
ssh -i ~/.ssh/id_ed25519_wireguard {PUBLIC_IP} # [tl! .cmd]
Preparation
And, as always, I'll first make sure the OS is fully updated before doing anything else:
sudo apt update # [tl! .cmd:1] sudo apt upgrade Then I'll install ufw to easily manage the host firewall, qrencode to make it easier to generate configs for mobile clients, openresolv to avoid this issue↗, and wireguard to, um, guard the wires:
sudo apt install ufw qrencode openresolv wireguard # [tl! .cmd] Configuring the host firewall with ufw is very straight forward:
# First, SSH: # [tl! .nocopy] sudo ufw allow 22/tcp # [tl! .cmd] # and WireGuard: # [tl! .nocopy] sudo ufw allow 51820/udp # [tl! .cmd] # Then turn it on: # [tl! .nocopy] sudo ufw enable # [tl! .cmd] The last preparatory step is to enable packet forwarding in the kernel so that the instance will be able to route traffic between the remote clients and my home network (once I get to that point). I can configure that on-the-fly with:
sudo sysctl -w net.ipv4.ip_forward=1 # [tl! .cmd] To make it permanent, I'll edit /etc/sysctl.conf and uncomment the same line:
sudo vi /etc/sysctl.conf # [tl! .cmd]
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
WireGuard Interface Config
I'll switch to the root user, move into the /etc/wireguard directory, and issue umask 077 so that the files I'm about to create will have a very limited permission set (to be accessible by root, and only root):
sudo -i # [tl! .cmd] cd /etc/wireguard # [tl! .cmd_root:1] umask 077 Then I can use the wg genkey command to generate the server's private key, save it to a file called server.key, pass it through wg pubkey to generate the corresponding public key, and save that to server.pub:
wg genkey | tee server.key | wg pubkey > server.pub # [tl! .cmd_root]
As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is wg0 and it draws its configuration from a file in /etc/wireguard named wg0.conf. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow.
The format of the interface configuration file will need to look something like this:
# torchlight! {"lineNumbers": true}
[Interface]   # this section defines the local WireGuard interface
Address =     # CIDR-format IP address of the virtual WireGuard interface
ListenPort =  # WireGuard listens on this port for incoming traffic (randomized if not specified)
PrivateKey =  # private key used to encrypt traffic sent to other peers
MTU =         # packet size
DNS =         # optional DNS server(s) and search domain(s) used for the VPN
PostUp =      # command executed by wg-quick wrapper when the interface comes up
PostDown =    # command executed by wg-quick wrapper when the interface goes down

[Peer]        # now we're talking about the other peers connecting to this instance
PublicKey =   # public key used to decrypt traffic sent by this peer
AllowedIPs =  # which IPs will be routed to this peer
There will be a single [Interface] section in each peer's configuration file, but they may include multiple [Peer] sections. For my config, I'll use the 10.200.200.0/24 network for WireGuard, and let this server be 10.200.200.1, the VyOS router in my home lab 10.200.200.2, and I'll assign IPs to the other peers from there. I found a note that Google Cloud uses an MTU size of 1460 bytes so that's what I'll set on this end. I'm going to configure WireGuard to use the VyOS router as the DNS server, and I'll specify my internal lab.bowdre.net search domain. Finally, I'll leverage the PostUp and PostDown directives to enable and disable NAT so that the server will be able to forward traffic between networks for me.
So here's the start of my GCP WireGuard server's /etc/wireguard/wg0.conf:
# torchlight! {"lineNumbers": true}
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = {GCP_PRIVATE_KEY}
MTU = 1460
DNS = 10.200.200.2, lab.bowdre.net
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE
I don't have any other peers ready to add to this config yet, but I can go ahead and bring up the interface all the same. I'm going to use the wg-quick wrapper instead of calling wg directly since it simplifies a bit of the configuration, but first I'll need to enable the wg-quick@{INTERFACE} service so that it will run automatically at startup:
systemctl enable wg-quick@wg0 # [tl! .cmd_root:1] systemctl start wg-quick@wg0 I can now bring up the interface with wg-quick up wg0 and check the status with wg show:
wg-quick up wg0 # [tl! .cmd_root]
[#] ip link add wg0 type wireguard # [tl! .nocopy:start]
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.200.200.1/24 dev wg0
[#] ip link set mtu 1460 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE # [tl! .nocopy:end]
wg show # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:3]
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820
I'll come back here once I've got a peer config to add.
Configure VyOS Router as WireGuard Peer
Comparatively, configuring WireGuard on VyOS is a bit more direct. I'll start by entering configuration mode and generating and binding a key pair for this interface:
configure # [tl! .cmd_root:1] run generate pki wireguard key-pair install interface wg0 And then I'll configure the rest of the options needed for the interface:
set interfaces wireguard wg0 address '10.200.200.2/24' # [tl! .cmd_root:start]
set interfaces wireguard wg0 description 'VPN to GCP'
set interfaces wireguard wg0 peer wireguard-gcp address '{GCP_PUBLIC_IP}'
set interfaces wireguard wg0 peer wireguard-gcp allowed-ips '0.0.0.0/0'
set interfaces wireguard wg0 peer wireguard-gcp persistent-keepalive '25'
set interfaces wireguard wg0 peer wireguard-gcp port '51820'
set interfaces wireguard wg0 peer wireguard-gcp public-key '{GCP_PUBLIC_KEY}' # [tl! .cmd_root:end]
Note that this time I'm allowing all IPs (0.0.0.0/0) so that this WireGuard interface will pass traffic intended for any destination (whether it's local, remote, or on the Internet). And I'm specifying a 25-second persistent-keepalive interval↗ to help ensure that this NAT-ed tunnel stays up even when it's not actively passing traffic - after all, I'll need the GCP-hosted peer to be able to initiate the connection so I can access the home network remotely.
While I'm at it, I'll also add a static route to ensure traffic for the WireGuard tunnel finds the right interface:
set protocols static route 10.200.200.0/24 interface wg0 # [tl! .cmd_root] And I'll add the new wg0 interface as a listening address for the VyOS DNS forwarder:
set service dns forwarding listen-address '10.200.200.2' # [tl! .cmd_root]
I can use the compare command to verify the changes I've made, and then apply and save the updated config:
compare # [tl! .cmd_root:2] commit save I can check the status of WireGuard on VyOS (and view the public key!) like so:
show interfaces wireguard wg0 summary # [tl! .cmd_root] interface: wg0 # [tl! .nocopy:start] public key: {VYOS_PUBLIC_KEY} private key: (hidden) listening port: 43543 peer: {GCP_PUBLIC_KEY} endpoint: {GCP_PUBLIC_IP}:51820 allowed ips: 0.0.0.0/0 transfer: 0 B received, 592 B sent persistent keepalive: every 25 seconds # [tl! .nocopy:end] See? That part was much easier to set up! But it doesn't look like it's actually passing traffic yet... because while the VyOS peer has been configured with the GCP peer's public key, the GCP peer doesn't know anything about the VyOS peer yet.
So I'll copy {VYOS_PUBLIC_KEY} and SSH back to the GCP instance to finish that configuration. Once I'm there, I can edit /etc/wireguard/wg0.conf as root and add in a new [Peer] section at the bottom, like this:
[Peer]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
This time, I'm telling WireGuard that the new peer has IP 10.200.200.2 but that it should also get traffic destined for the 192.168.1.0/24 and 172.16.0.0/16 networks, my home and lab networks. Again, the AllowedIPs parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption).
After saving the file, I can either restart WireGuard by bringing the interface down and back up (wg-quick down wg0 && wg-quick up wg0), or I can reload it on the fly with:
sudo -i # [tl! .cmd]
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
(I can't just use wg syncconf wg0 directly since /etc/wireguard/wg0.conf includes the PostUp/PostDown commands which can only be parsed by the wg-quick wrapper, so I'm using wg-quick strip {INTERFACE} to grab the contents of the config file, remove the problematic bits, and then pass what's left to the wg syncconf {INTERFACE} command to update the current running config.)
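Since that incantation is easy to forget, it can be handy to wrap it in a tiny helper. This is just a convenience sketch (a shell function to drop in ~/.bashrc on the server), not anything WireGuard itself provides:
# reload a wg-quick-managed interface in place; defaults to wg0
wg-reload() {
  sudo bash -c "wg syncconf ${1:-wg0} <(wg-quick strip ${1:-wg0})"
}
# usage: wg-reload wg0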
Now I can check the status of WireGuard on the GCP end:
wg show # [tl! .cmd_root] interface: wg0 # [tl! .nocopy:start] public key: {GCP_PUBLIC_KEY} private key: (hidden) listening port: 51820 peer: {VYOS_PUBLIC_KEY} endpoint: {VYOS_PUBLIC_IP}:43990 allowed ips: 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16 latest handshake: 55 seconds ago transfer: 1.23 KiB received, 368 B sent # [tl! .nocopy:end] Hey, we're passing traffic now! And I can verify that I can ping stuff on my home and lab networks from the GCP instance:
ping -c 1 192.168.1.5 # [tl! .cmd] PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data. # [tl! .nocopy:start] 64 bytes from 192.168.1.5: icmp_seq=1 ttl=127 time=35.6 ms --- 192.168.1.5 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 35.598/35.598/35.598/0.000 ms # [tl! .nocopy:end] ping -c 1 172.16.10.1 # [tl! .cmd] PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data. # [tl! .nocopy:start] 64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=35.3 ms --- 172.16.10.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 35.275/35.275/35.275/0.000 ms # [tl! .nocopy:end] Cool!
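A couple of other quick checks can confirm that things are wired up the way I think they are: which AllowedIPs WireGuard has mapped to the VyOS peer, and whether DNS queries actually make it across the tunnel to the VyOS forwarder. This is just a sketch - the hostname in the dig example is a placeholder for something that really exists in the lab zone, and dig may need to be installed first:
wg show wg0 allowed-ips # [tl! .cmd_root]
{VYOS_PUBLIC_KEY}	10.200.200.2/32 192.168.1.0/24 172.16.0.0/16 # [tl! .nocopy]
dig +short {SOME_LAB_HOSTNAME}.lab.bowdre.net @10.200.200.2 # [tl! .cmd_root]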
Adding Additional Peers
So my GCP and VyOS peers are talking, but the ultimate goals here are for my Chromebook to have access to my homelab resources while away from home, and for my phones to have secure internet access when connected to WiFi networks I don't control. That means adding at least two more peers to the GCP server. WireGuard offers downloads↗ for just about every operating system you can imagine, but I'll be using the Android app↗ for both the Chromebook and phones.
Chromebook
The first step is to install the WireGuard Android app.
Note: the version of the WireGuard app currently available on the Play Store (v1.0.20210926) has an issue↗ on Chrome OS that causes it to not pass traffic after the Chromebook has resumed from sleep. The workaround for this is to install an older version of the app (1.0.20210506) which can be obtained from F-Droid↗. Doing so requires having the Linux environment enabled on Chrome OS and the Develop Android Apps > Enable ADB Debugging option enabled in the Chrome OS settings. The process for sideloading apps is detailed here↗.
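For reference, once ADB debugging is enabled and the APK has been downloaded into the Linux container, the sideload itself is about as simple as it gets. The APK filename here is hypothetical:
adb devices # [tl! .cmd:1]
adb install com.wireguard.android_1.0.20210506.apk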
Once it's installed, I open the app and click the "Plus" button to create a new tunnel, and select the Create from scratch option. I click the circle-arrows icon at the right edge of the Private key field, and that automatically generates this peer's private and public key pair. Simply clicking on the Public key field will automatically copy the generated key to my clipboard, which will be useful for sharing it with the server. Otherwise I fill out the Interface section similarly to what I've done already:
Name: wireguard-gcp
Private key: {CB_PRIVATE_KEY}
Public key: {CB_PUBLIC_KEY}
Addresses: 10.200.200.3/24
Listen port: (leave blank)
DNS servers: 10.200.200.2
MTU: (leave blank)
I then click the Add Peer button to tell this client about the peer it will be connecting to - the GCP-hosted instance:
Public key: {GCP_PUBLIC_KEY}
Pre-shared key: (leave blank)
Persistent keepalive: (leave blank)
Endpoint: {GCP_PUBLIC_IP}:51820
Allowed IPs: 0.0.0.0/0
I shouldn't need the keepalive for the "Road Warrior" peers connecting to the GCP peer, but I can always set that later if I run into stability issues.
Now I can go ahead and save this configuration, but before I try (and fail) to connect I first need to tell the cloud-hosted peer about the Chromebook. So I fire up an SSH session to my GCP instance, become root, and edit the WireGuard configuration to add a new [Peer] section.
sudo -i # [tl! .cmd] vi /etc/wireguard/wg0.conf # [tl! .cmd_root] Here's the new section that I'll add to the bottom of the config:
[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32
This one is acting as a single-node endpoint (rather than an entryway into other networks like the VyOS peer) so setting AllowedIPs to only the peer's IP makes sure that WireGuard will only send it traffic specifically intended for this peer.
So my complete /etc/wireguard/wg0.conf looks like this so far:
# torchlight! {"lineNumbers": true}
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = {GCP_PRIVATE_KEY}
MTU = 1460
DNS = 10.200.200.2, lab.bowdre.net
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE

[Peer] # [tl! focus:start]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16

[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32 # [tl! focus:end]
Now to save the file and reload the WireGuard configuration again:
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
At this point I can activate the connection in the WireGuard Android app, wait a few seconds, and check with wg show to confirm that the tunnel has been established successfully:
wg show # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:start]
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820

peer: {VYOS_PUBLIC_KEY}
  endpoint: {VYOS_PUBLIC_IP}:43990
  allowed ips: 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
  latest handshake: 1 minute, 55 seconds ago
  transfer: 200.37 MiB received, 16.32 MiB sent

peer: {CB_PUBLIC_KEY}
  endpoint: {CB_PUBLIC_IP}:33752
  allowed ips: 10.200.200.3/32
  latest handshake: 48 seconds ago
  transfer: 169.17 KiB received, 808.33 KiB sent # [tl! .nocopy:end]
And I can even access my homelab when not at home!
Android Phone
Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings.
I'll start by SSHing to the GCP instance, elevating to root, setting the restrictive umask again, and creating a new folder to store client configurations.
sudo -i # [tl! .cmd] umask 077 # [tl! .cmd_root:2] mkdir /etc/wireguard/clients cd /etc/wireguard/clients As before, I'll use the built-in wg commands to generate the private and public key pair:
wg genkey | tee phone1.key | wg pubkey > phone1.pub # [tl! .cmd_root]
I can then use those keys to assemble the config for the phone:
# torchlight! {"lineNumbers": true}
# /etc/wireguard/clients/phone1.conf
[Interface]
PrivateKey = {PHONE1_PRIVATE_KEY}
Address = 10.200.200.4/24
DNS = 10.200.200.2, lab.bowdre.net

[Peer]
PublicKey = {GCP_PUBLIC_KEY}
AllowedIPs = 0.0.0.0/0
Endpoint = {GCP_PUBLIC_IP}:51820
I'll also add the interface address and corresponding public key to a new [Peer] section of /etc/wireguard/wg0.conf:
[Peer]
PublicKey = {PHONE1_PUBLIC_KEY}
AllowedIPs = 10.200.200.4/32
And reload the WireGuard config:
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
Back in the clients/ directory, I can use qrencode to render the phone configuration file (keys and all!) as a QR code:
qrencode -t ansiutf8 < phone1.conf # [tl! .cmd_root]
And then I just open the WireGuard app on my phone and use the Scan from QR Code option. After a successful scan, it'll prompt me to name the new tunnel, and then I should be able to connect right away. I can even access my vSphere lab environment - not that it offers a great mobile experience... Before moving on too much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access.
rm -f /etc/wireguard/clients/* # [tl! .cmd_root]
Bonus: Automation!
I've written before about a set of Tasker↗ profiles I put together so that my phone would automatically connect to a VPN whenever it connects to a WiFi network I don't control. It didn't take much effort at all to adapt the profile to work with my new WireGuard setup.
Two quick pre-requisites first:
1. Open the WireGuard Android app, tap the three-dot menu button at the top right, expand the Advanced section, and enable the Allow remote control apps option so that Tasker will be permitted to control WireGuard.
2. Exclude the WireGuard app from Android's battery optimization so that it doesn't have any problems running in the background. On (Pixel-flavored) Android 12, this can be done by going to Settings > Apps > See all apps > WireGuard > Battery and selecting the Unrestricted option.
On to the Tasker config. The only changes will be in the VPN on Strange Wifi profile. I'll remove the OpenVPN-related actions from the Enter and Exit tasks and replace them with the built-in Tasker > Tasker Function WireGuard Set Tunnel action.
For the Enter task, I'll set the tunnel status to true and specify the name of the tunnel as configured in the WireGuard app; the Exit task gets the status set to false to disable the tunnel. Both actions will be conditional upon the %TRUSTED_WIFI variable being unset.
Profile: VPN on Strange WiFi
Settings: Notification: no
State: Wifi Connected [ SSID:* MAC:* IP:* Active:Any ]

Enter Task: ConnectVPN
A1: Tasker Function [ Function: WireGuardSetTunnel(true,wireguard-gcp) ]
    If [ %TRUSTED_WIFI !Set ]

Exit Task: DisconnectVPN
A1: Tasker Function [ Function: WireGuardSetTunnel(false,wireguard-gcp) ]
    If [ %TRUSTED_WIFI !Set ]
Automagic!
Other Peers
Any additional peers that need to be added in the future will likely follow one of the above processes. The steps are always to generate the peer's key pair, use the private key to populate the [Interface] portion of the peer's config, configure the [Peer] section with the public key, allowed IPs, and endpoint address of the peer it will be connecting to, and then to add the new peer's public key and internal WireGuard IP to a new [Peer] section of the existing peer's config.
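To save some future copy-pasting, those steps could be rolled into a rough helper script run as root on the GCP peer. This is only a sketch that assumes the same paths and addressing used throughout this post (with the usual {GCP_PUBLIC_IP} placeholder), and it skips any error handling or duplicate-IP checking:
#!/bin/bash
# add-peer.sh: generate keys for a new single-node peer, register it in wg0.conf,
# emit a matching client config, and display that config as a QR code.
# Usage: ./add-peer.sh phone2 10.200.200.5
set -e
name="$1"
ip="$2"
cd /etc/wireguard
umask 077
mkdir -p clients
wg genkey | tee "clients/${name}.key" | wg pubkey > "clients/${name}.pub"

# register the new peer on the server side
cat >> wg0.conf <<EOF

[Peer]
# ${name}
PublicKey = $(cat "clients/${name}.pub")
AllowedIPs = ${ip}/32
EOF

# build the client-side config
cat > "clients/${name}.conf" <<EOF
[Interface]
PrivateKey = $(cat "clients/${name}.key")
Address = ${ip}/24
DNS = 10.200.200.2, lab.bowdre.net

[Peer]
PublicKey = $(cat server.pub)
AllowedIPs = 0.0.0.0/0
Endpoint = {GCP_PUBLIC_IP}:51820
EOF

# apply the change and show the QR code for easy import
wg syncconf wg0 <(wg-quick strip wg0)
qrencode -t ansiutf8 < "clients/${name}.conf"
From there it's the same routine as before: import the config (or scan the QR code) on the new device, then clean up the generated client keys once they're safely on the peer.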
`,description:"",url:"https://runtimeterror.dev/cloud-based-wireguard-vpn-remote-homelab-access/"},"https://runtimeterror.dev/run-scripts-in-guest-os-with-vra-abx-actions/":{title:"Run scripts in guest OS with vRA ABX Actions",tags:["vra","abx","powershell","vmware"],content:"Thus far in my vRealize Automation project, I've primarily been handing the payload over to vRealize Orchestrator to do the heavy lifting on the back end. This approach works really well for complex multi-part workflows (like when generating unique hostnames), but it may be overkill for more linear tasks (such as just running some simple commands inside of a deployed guest OS). In this post, I'll explore how I use vRA Action Based eXtensibility (ABX)↗ to do just that.\nThe Goal My ABX action is going to use PowerCLI to perform a few steps inside a deployed guest OS (Windows-only for this demonstration):\nAuto-update VM tools (if needed). Add specified domain users/groups to the local Administrators group. Extend the C: volume to fill the VMDK. Set up Windows Firewall to enable remote access. Create a scheduled task to attempt to automatically apply any available Windows updates. Template Changes Cloud Assembly I'll need to start by updating the cloud template so that the requester can input an (optional) list of admin accounts to be added to the VM, and to enable specifying a disk size to override the default from the source VM template.\nI will also add some properties to tell PowerCLI (and the Invoke-VmScript cmdlet in particular) how to connect to the VM.\nInputs section I'll kick this off by going into Cloud Assembly and editing the WindowsDemo template I've been working on for the past few eons. I'll add a diskSize input:\n# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: site: [...] image: [...] size: [...] diskSize: # [tl! focus:5] title: &#39;System drive size&#39; default: 60 type: integer minimum: 60 maximum: 200 network: [...] adJoin: [...] [...] The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.\nI'll also drop in an adminsList input at the bottom of the section:\n# torchlight! {&#34;lineNumbers&#34;: true} [...] poc_email: [...] ticket: [...] adminsList: # [tl! focus:4] type: string title: Administrators description: Comma-separated list of domain accounts/groups which need admin access to this server. default: &#39;&#39; resources: Cloud_vSphere_Machine_1: [...] Resources section In the Resources section of the cloud template, I'm going to add a few properties that will tell the ABX script how to connect to the appropriate vCenter and then the VM.\nvCenter: The vCenter server where the VM will be deployed, and thus the server which PowerCLI will authenticate against. In this case, I've only got one vCenter, but a larger environment might have multiples. Defining this in the cloud template makes it easy to select automagically if needed. (For instance, if I had a bow-vcsa and a dre-vcsa for my different sites, I could do something like vCenter: '${input.site}-vcsa.lab.bowdre.net' here.) vCenterUser: The username with rights to the VM in vCenter. Again, this doesn't have to be a static assignment. templateUser: This is the account that will be used by Invoke-VmScript to log in to the guest OS. My template will use the default Administrator account for non-domain systems, but the lab\\vra service account on domain-joined systems (using the adJoin input I set up earlier). 
I'll also include the adminsList input from earlier so that can get passed to ABX as well. And I'm going to add in an adJoin property (mapped to the existing input.adJoin) so that I'll have that to work with later.\n# torchlight! {&#34;lineNumbers&#34;: true} [...] resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;${input.image}&#39; flavor: &#39;${input.size}&#39; site: &#39;${input.site}&#39; vCenter: vcsa.lab.bowdre.net # [tl! focus:3] vCenterUser: [email protected] templateUser: &#39;${input.adJoin ? &#34;vra@lab&#34; : &#34;Administrator&#34;}&#39; adminsList: &#39;${input.adminsList}&#39; environment: &#39;${input.environment}&#39; function: &#39;${input.function}&#39; app: &#39;${input.app}&#39; adJoin: &#39;${input.adJoin}&#39; ignoreActiveDirectory: &#39;${!input.adJoin}&#39; [...] And I will add in a storage property as well which will automatically adjust the deployed VMDK size to match the specified input:\n# torchlight! {&#34;torchlightAnnotations&#34;: false} # torchlight! {&#34;lineNumbers&#34;: true} [...] description: &#39;${input.description}&#39; networks: [...] constraints: [...] storage: # [tl! focus:1] bootDiskCapacityInGB: &#39;${input.diskSize}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: [...] [...] Complete template Okay, all together now:\n# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: site: type: string title: Site enum: - BOW - DRE image: type: string title: Operating System oneOf: - title: Windows Server 2019 const: ws2019 default: ws2019 size: title: Resource Size type: string oneOf: - title: &#39;Micro [1vCPU|1GB]&#39; const: micro - title: &#39;Tiny [1vCPU|2GB]&#39; const: tiny - title: &#39;Small [2vCPU|2GB]&#39; const: small default: small diskSize: title: &#39;System drive size&#39; default: 60 type: integer minimum: 60 maximum: 200 network: title: Network type: string adJoin: title: Join to AD domain type: boolean default: true staticDns: title: Create static DNS record type: boolean default: false environment: type: string title: Environment oneOf: - title: Development const: D - title: Testing const: T - title: Production const: P default: D function: type: string title: Function Code oneOf: - title: Application (APP) const: APP - title: Desktop (DSK) const: DSK - title: Network (NET) const: NET - title: Service (SVS) const: SVS - title: Testing (TST) const: TST default: TST app: type: string title: Application Code minLength: 3 maxLength: 3 default: xxx description: type: string title: Description description: Server function/purpose default: Testing and evaluation poc_name: type: string title: Point of Contact Name default: Jack Shephard poc_email: type: string title: Point of Contact Email default: [email protected] ticket: type: string title: Ticket/Request Number default: 4815162342 adminsList: type: string title: Administrators description: Comma-separated list of domain accounts/groups which need admin access to this server. default: &#39;&#39; resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;${input.image}&#39; flavor: &#39;${input.size}&#39; site: &#39;${input.site}&#39; vCenter: vcsa.lab.bowdre.net vCenterUser: [email protected] templateUser: &#39;${input.adJoin ? 
&#34;vra@lab&#34; : &#34;Administrator&#34;}&#39; adminsList: &#39;${input.adminsList}&#39; environment: &#39;${input.environment}&#39; function: &#39;${input.function}&#39; app: &#39;${input.app}&#39; adJoin: &#39;${input.adJoin}&#39; ignoreActiveDirectory: &#39;${!input.adJoin}&#39; activeDirectory: relativeDN: &#39;${&#34;OU=Servers,OU=Computers,OU=&#34; + input.site + &#34;,OU=LAB&#34;}&#39; customizationSpec: &#39;${input.adJoin ? &#34;vra-win-domain&#34; : &#34;vra-win-workgroup&#34;}&#39; staticDns: &#39;${input.staticDns}&#39; dnsDomain: lab.bowdre.net poc: &#39;${input.poc_name + &#34; (&#34; + input.poc_email + &#34;)&#34;}&#39; ticket: &#39;${input.ticket}&#39; description: &#39;${input.description}&#39; networks: - network: &#39;${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:${to_lower(input.site)}&#39; storage: bootDiskCapacityInGB: &#39;${input.diskSize}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing constraints: - tag: &#39;net:${input.network}&#39; With the template sorted, I need to assign it a new version and release it to the catalog so that the changes will be visible to Service Broker: Service Broker custom form I now need to also make some updates to the custom form configuration in Service Broker so that the new fields will appear on the request form. First things first, though: after switching to the Service Broker UI, I go to Content &amp; Policies &gt; Content Sources, open the linked content source, and click the Save &amp; Import button to force Service Broker to pull in the latest versions from Cloud Assembly.\nI can then go to Content, click the three-dot menu next to my WindowsDemo item, and select the Customize Form option. I drag-and-drop the System drive size from the Schema Elements section onto the canvas, placing it directly below the existing Resource Size field. With the field selected, I use the Properties section to edit the label with a unit so that users will better understand what they're requesting. On the Values tab, I change the Step option to 5 so that we won't wind up with users requesting a disk size of 62.357 GB or anything crazy like that. I'll drag-and-drop the Administrators field to the canvas, and put it right below the VM description: I only want this field to be visible if the VM is going to be joined to the AD domain, so I'll set the Visibility accordingly: That should be everything I need to add to the custom form so I'll be sure to hit that big Save button before moving on.\nExtensibility Okay, now it's time to actually make the stuff work on the back end. But before I get to writing the actual script, there's something else I'll need to do first. Remember how I added properties to store the usernames for vCenter and the VM template in the cloud template? My ABX action will also need to know the passwords for those accounts. I didn't add those to the cloud template since anything added as a property there (even if flagged as a secret!) would be visible in plain text to any external handlers (like vRO). Instead, I'll store those passwords as encrypted Action Constants.\nAction Constants From the vRA Cloud Assembly interface, I'll navigate to Extensibility &gt; Library &gt; Actions and then click the Action Constants button up top. I can then click New Action Constant and start creating the ones I need:\nvCenterPassword: for logging into vCenter. templatePassWinWorkgroup: for logging into non-domain VMs. 
- templatePassWinDomain: for logging into VMs with the designated domain credentials.

I'll make sure to enable the Encrypt the action constant value toggle for each so they'll be protected. Once all those constants are created I can move on to the meat of this little project:

ABX Action
I'll click back to Extensibility > Library > Actions and then + New Action. I give the new action a clever title and description. I then hit the language dropdown near the top left and select to use powershell so that I can use those sweet, sweet PowerCLI cmdlets. And I'll pop over to the right side to map the Action Constants I created earlier so that I can reference them in the script I'm about to write. Now for The Script:

```powershell
# torchlight! {"lineNumbers": true}
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
    Windows:
      - auto-update VM tools
      - add specified domain users/groups to local Administrators group
      - extend C: volume to fill disk
      - set up remote access
      - create a scheduled task to (attempt to) apply Windows updates

    ## Action Secrets:
      templatePassWinDomain     # password for domain account with admin rights to the template (domain-joined deployments)
      templatePassWinWorkgroup  # password for local account with admin rights to the template (standalone deployments)
      vCenterPassword           # password for vCenter account passed from the cloud template

    ## Action Inputs:
    ## Inputs from deployment:
      resourceNames[0]                # VM name [BOW-DVRT-XXX003]
      customProperties.vCenterUser    # user for connecting to vCenter [lab\vra]
      customProperties.vCenter        # vCenter instance to connect to [vcsa.lab.bowdre.net]
      customProperties.dnsDomain      # long-form domain name [lab.bowdre.net]
      customProperties.adminsList     # list of domain users/groups to be added as local admins [john, lab\vra, vRA-Admins]
      customProperties.adJoin         # boolean to determine if the system will be joined to AD (true) or not (false)
      customProperties.templateUser   # username used for connecting to the VM through vmtools [Administrator] / [root]
#>

function handler($context, $inputs) {
  # Initialize global variables
  $vcUser = $inputs.customProperties.vCenterUser
  $vcPassword = $context.getSecret($inputs."vCenterPassword")
  $vCenter = $inputs.customProperties.vCenter

  # Create vmtools connection to the VM
  $vmName = $inputs.resourceNames[0]
  Connect-ViServer -Server $vCenter -User $vcUser -Password $vcPassword -Force
  $vm = Get-VM -Name $vmName
  Write-Host "Waiting for VM Tools to start..."
  if (-not (Wait-Tools -VM $vm -TimeoutSeconds 180)) {
    Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
  }

  # Detect OS type
  $count = 0
  While (!$osType) {
    Try {
      $osType = ($vm | Get-View).Guest.GuestFamily.ToString()
      $toolsStatus = ($vm | Get-View).Guest.ToolsStatus.ToString()
    } Catch {
      # 60s timeout
      if ($count -ge 12) {
        Write-Error "Timeout exceeded while waiting for tools." -ErrorAction Stop
        break
      }
      Write-Host "Waiting for tools..."
      $count++
      Sleep 5
    }
  }
  Write-Host "$vmName is a $osType and its tools status is $toolsStatus."

  # Update tools on Windows if out of date
  if ($osType.Equals("windowsGuest") -And $toolsStatus.Equals("toolsOld")) {
    Write-Host "Updating VM Tools..."
    Update-Tools $vm
    Write-Host "Waiting for VM Tools to start..."
    if (-not (Wait-Tools -VM $vm -TimeoutSeconds 180)) {
      Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
    }
  }

  # Run OS-specific tasks
  if ($osType.Equals("windowsGuest")) {
    # Initialize Windows variables
    $domainLong = $inputs.customProperties.dnsDomain
    $adminsList = $inputs.customProperties.adminsList
    $adJoin = $inputs.customProperties.adJoin
    $templateUser = $inputs.customProperties.templateUser
    $templatePassword = $adJoin.Equals("true") ? $context.getSecret($inputs."templatePassWinDomain") : $context.getSecret($inputs."templatePassWinWorkgroup")

    # Add domain accounts to local administrators group
    if ($adminsList.Length -gt 0 -And $adJoin.Equals("true")) {
      # Standardize users entered without domain as DOMAIN\username
      if ($adminsList.Length -gt 0) {
        $domainShort = $domainLong.split('.')[0]
        $adminsArray = @(($adminsList -Split ',').Trim())
        For ($i=0; $i -lt $adminsArray.Length; $i++) {
          If ($adminsArray[$i] -notmatch "$domainShort.*\\" -And $adminsArray[$i] -notmatch "@$domainShort") {
            $adminsArray[$i] = $domainShort + "\" + $adminsArray[$i]
          }
        }
        $admins = '"{0}"' -f ($adminsArray -join '","')
        Write-Host "Administrators: $admins"
      }
      $adminScript = "Add-LocalGroupMember -Group Administrators -Member $admins"
      Start-Sleep -s 10
      Write-Host "Attempting to add administrator accounts..."
      $runAdminScript = Invoke-VMScript -VM $vm -ScriptText $adminScript -GuestUser $templateUser -GuestPassword $templatePassword
      if ($runAdminScript.ScriptOutput.Length -eq 0) {
        Write-Host "Successfully added [$admins] to Administrators group."
      } else {
        Write-Host "Attempt to add [$admins] to Administrators group completed with warnings:`n" $runAdminScript.ScriptOutput "`n"
      }
    } else {
      Write-Host "No admins to add..."
    }

    # Extend C: volume to fill system drive
    $partitionScript = "`$Partition = Get-Volume -DriveLetter C | Get-Partition; `$Partition | Resize-Partition -Size (`$Partition | Get-PartitionSupportedSize).sizeMax"
    Start-Sleep -s 10
    Write-Host "Attempting to extend system volume..."
    $runPartitionScript = Invoke-VMScript -VM $vm -ScriptText $partitionScript -GuestUser $templateUser -GuestPassword $templatePassword
    if ($runPartitionScript.ScriptOutput.Length -eq 0) {
      Write-Host "Successfully extended system partition."
    } else {
      Write-Host "Attempt to extend system volume completed with warnings:`n" $runPartitionScript.ScriptOutput "`n"
    }

    # Set up remote access
    $remoteScript = "Enable-NetFirewallRule -DisplayGroup `"Remote Desktop`"
      Enable-NetFirewallRule -DisplayGroup `"Windows Management Instrumentation (WMI)`"
      Enable-NetFirewallRule -DisplayGroup `"File and Printer Sharing`"
      Enable-PsRemoting
      Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -name `"fDenyTSConnections`" -Value 0"
    Start-Sleep -s 10
    Write-Host "Attempting to enable remote access (RDP, WMI, File and Printer Sharing, PSRemoting)..."
    $runRemoteScript = Invoke-VMScript -VM $vm -ScriptText $remoteScript -GuestUser $templateUser -GuestPassword $templatePassword
    if ($runRemoteScript.ScriptOutput.Length -eq 0) {
      Write-Host "Successfully enabled remote access."
    } else {
      Write-Host "Attempt to enable remote access completed with warnings:`n" $runRemoteScript.ScriptOutput "`n"
    }

    # Create scheduled task to apply updates
    $updateScript = "`$action = New-ScheduledTaskAction -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -Command `"& {Install-WUUpdates -Updates (Start-WUScan)}`"'
      `$trigger = New-ScheduledTaskTrigger -Once -At ([DateTime]::Now.AddMinutes(1))
      `$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -Hidden
      Register-ScheduledTask -Action `$action -Trigger `$trigger -Settings `$settings -TaskName `"Initial_Updates`" -User `"NT AUTHORITY\SYSTEM`" -RunLevel Highest
      `$task = Get-ScheduledTask -TaskName `"Initial_Updates`"
      `$task.Triggers[0].StartBoundary = [DateTime]::Now.AddMinutes(1).ToString(`"yyyy-MM-dd'T'HH:mm:ss`")
      `$task.Triggers[0].EndBoundary = [DateTime]::Now.AddHours(3).ToString(`"yyyy-MM-dd'T'HH:mm:ss`")
      `$task.Settings.AllowHardTerminate = `$True
      `$task.Settings.DeleteExpiredTaskAfter = 'PT0S'
      `$task.Settings.ExecutionTimeLimit = 'PT2H'
      `$task.Settings.Volatile = `$False
      `$task | Set-ScheduledTask"
    Start-Sleep -s 10
    Write-Host "Creating a scheduled task to apply updates..."
    $runUpdateScript = Invoke-VMScript -VM $vm -ScriptText $updateScript -GuestUser $templateUser -GuestPassword $templatePassword
    Write-Host "Created task:`n" $runUpdateScript.ScriptOutput "`n"
  } elseif ($osType.Equals("linuxGuest")) {
    #TODO
    Write-Host "Linux systems not supported by this action... yet"
  }

  # Cleanup connection
  Disconnect-ViServer -Server $vCenter -Force -Confirm:$false
}
```

I like to think that it's fairly well documented (but I've also been staring at / tweaking this for a while); here's the gist of what it's doing:

1. Capture vCenter login credentials from the Action Constants and the customProperties of the deployment (from the cloud template).
2. Use those creds to Connect-ViServer to the vCenter instance.
3. Find the VM object which matches the resourceName from the vRA deployment.
4. Wait for VM tools to be running and accessible on that VM.
5. Determine the OS type of the VM (Windows/Linux).
6. If it's Windows and the tools are out of date, update them and wait for the reboot to complete.
7. If it's Windows, move on:
   - If it needs to add accounts to the Administrators group, assemble the needed script and run it in the guest via Invoke-VmScript.
   - Assemble a script to expand the C: volume to fill whatever size VMDK is attached as HDD1, and run it in the guest via Invoke-VmScript.
   - Assemble a script to set common firewall exceptions for remote access, and run it in the guest via Invoke-VmScript.
   - Assemble a script to schedule a task to (attempt to) apply Windows updates, and run it in the guest via Invoke-VmScript.

It wouldn't be hard to customize the script to perform different actions (or even run against Linux systems - just set $whateverScript = "apt update && apt upgrade" (or whatever) and call it with $runWhateverScript = Invoke-VMScript -VM $vm -ScriptText $whateverScript -GuestUser $templateUser -GuestPassword $templatePassword), but this is as far as I'm going to take it for this demo.
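To make that last point concrete, here's a minimal sketch of what the currently-stubbed linuxGuest branch could look like; it is a hypothetical extension (not part of the script above), the apt commands are just an example payload, and -ScriptType Bash tells Invoke-VMScript to run the text as a shell script instead of PowerShell:

```powershell
# Hypothetical replacement for the "#TODO" linuxGuest stub in the handler above
elseif ($osType.Equals("linuxGuest")) {
  # example in-guest payload; swap in whatever commands you actually need
  $patchScript = "apt update && apt upgrade -y"
  Write-Host "Attempting to apply updates..."
  $runPatchScript = Invoke-VMScript -VM $vm -ScriptText $patchScript `
    -GuestUser $templateUser -GuestPassword $templatePassword -ScriptType Bash
  Write-Host $runPatchScript.ScriptOutput
}
```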
Event subscription
Before I can test the new action, I'll need to first add an extensibility subscription so that the ABX action will get called during the deployment. So I head to Extensibility > Subscriptions and click the New Subscription button. I'll be using this to call my new configureGuest action, so I'll name the subscription Configure Guest. I tie it to the Compute Post Provision event and bind my action. I do have another subscription on that event already, VM Post-Provisioning, which is used to modify the VM object with notes and custom attributes. I'd like to make sure that my work inside the guest happens after that other subscription is completed, so I'll enable blocking and give it a priority of 2. After hitting the Save button, I go back to that other VM Post-Provisioning subscription, set it to enable blocking, and give it a priority of 1. This will ensure that the new subscription fires after the older one completes, which should avoid any conflicts between the two.

Testing
Alright, now let's see if it worked. I head into Service Broker to submit the deployment request. Note that I've set the disk size to 65GB (up from the default of 60), and I'm adding lab\testy as a local admin on the deployed system.

Once the deployment finishes, I can switch back to Cloud Assembly and check Extensibility > Activity > Action Runs and then click on the configureGuest run to see how it did. It worked!

The Log tab lets me see the progress as the execution proceeds:

```
Logging in to server.
logged in to server vcsa.lab.bowdre.net:443
Read-only file system
09/03/2021 19:08:27	Get-VM	Finished execution
09/03/2021 19:08:27	Get-VM
Waiting for VM Tools to start...
09/03/2021 19:08:29	Wait-Tools	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:08:29	Wait-Tools	Finished execution
09/03/2021 19:08:29	Wait-Tools
09/03/2021 19:08:29	Get-View	Finished execution
09/03/2021 19:08:29	Get-View
09/03/2021 19:08:29	Get-View	Finished execution
09/03/2021 19:08:29	Get-View
BOW-PSVS-XXX001 is a windowsGuest and its tools status is toolsOld.
Updating VM Tools...
09/03/2021 19:08:30	Update-Tools	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:08:30	Update-Tools	Finished execution
09/03/2021 19:08:30	Update-Tools
Waiting for VM Tools to start...
09/03/2021 19:09:00	Wait-Tools	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:00	Wait-Tools	Finished execution
09/03/2021 19:09:00	Wait-Tools
Administrators: "lab\testy"
Attempting to add administrator accounts...
09/03/2021 19:09:10	Invoke-VMScript	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:10	Invoke-VMScript	Finished execution
09/03/2021 19:09:10	Invoke-VMScript
Successfully added ["lab\testy"] to Administrators group.
Attempting to extend system volume...
09/03/2021 19:09:27	Invoke-VMScript	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:27	Invoke-VMScript	Finished execution
09/03/2021 19:09:27	Invoke-VMScript
Successfully extended system partition.
Attempting to enable remote access (RDP, WMI, File and Printer Sharing, PSRemoting)...
09/03/2021 19:09:49	Invoke-VMScript	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:49	Invoke-VMScript	Finished execution
09/03/2021 19:09:49	Invoke-VMScript
Successfully enabled remote access.
Creating a scheduled task to apply updates...
09/03/2021 19:10:12	Invoke-VMScript	5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:10:12	Invoke-VMScript	Finished execution
09/03/2021 19:10:12	Invoke-VMScript
Created task:
TaskPath  TaskName         State
--------  --------         -----
\         Initial_Updates  Ready
\         Initial_Updates  Ready
```

So it claims to have successfully updated the VM tools, added lab\testy to the local Administrators group, extended the C: volume to fill the 65GB virtual disk, added firewall rules to permit remote access, and created a scheduled task to apply updates. I can open a console session to the VM to spot-check the results. Yep, testy is an admin now! And C: fills the disk!
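For that console spot-check, a few in-guest PowerShell commands cover the same ground; a minimal sketch (the task name matches the one registered by the script above):

```powershell
# Run in a console/RDP session on the deployed VM to verify the in-guest changes
Get-LocalGroupMember -Group Administrators | Select-Object Name, PrincipalSource   # lab\testy should be listed
Get-Volume -DriveLetter C | Select-Object DriveLetter, Size, SizeRemaining          # size should reflect the 65GB disk
Get-ScheduledTask -TaskName 'Initial_Updates' | Select-Object TaskName, State       # the update task should exist
```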
Wrap-up
This is really just the start of what I've been able to do in-guest leveraging Invoke-VmScript from an ABX action. I've got a slightly-larger version of this script↗ which also performs similar actions in Linux guests. And I've also cobbled together ABX solutions for generating randomized passwords for local accounts and storing them in an organization's password management solution. I would like to get around to documenting those here in the future... we'll see.

In any case, hopefully this information helps someone else get started down this path. I'd love to see whatever enhancements you are able to come up with! (https://runtimeterror.dev/run-scripts-in-guest-os-with-vra-abx-actions/)

Notes on vRA HA with NSX-ALB (https://runtimeterror.dev/notes-on-vra-ha-with-nsx-alb/)
Tags: nsx, cluster, vra, availability, networking

This is going to be a pretty quick recap of the steps I recently took to convert a single-node instance of vRealize Automation 8.4.2 into a 3-node high-availability vRA cluster behind a standalone NSX Advanced Load Balancer (without NSX being deployed in the environment). No screenshots or specific details since I ran through this in the lab at work and didn't capture anything along the way, and my poor NUC homelab struggles enough to run a single instance of memory-hogging vRA.
Getting started with NSX-ALB
I found a lot of information on how to use NSX-ALB as a component of a broader NSX-equipped environment, but not a lot of detail on how to use the ALB without NSX - until I found Rudi Martinsen's blog on the subject↗. That turned out to be a great reference for the ALB configuration, so be sure to check it out if you need more details than what I provide in this section.
Download
NSX-ALB was formerly known as the Avi Vantage Controller, and downloads are available here↗. You'll need to log in with your VMware Customer Connect account to access the download, and then grab the latest VMware Controller OVA. Be sure to make a note of the default password listed on the right-hand side since you'll need it to log in post-deployment.
Deploy
It's an OVA, so deploy it like an OVA. When you get to the "Customize template" stage, drop in valid data for the Management Interface IP Address, Management Interface Subnet Mask, and Default Gateway fields but leave everything else blank. Click on through to the end and watch the thing deploy. Once the deployment completes, power on the new VM and wait a little bit for it to configure itself and get ready for operation.
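If you'd rather not click through the OVA wizard, the same deployment can be scripted with PowerCLI; a rough sketch under assumed names (the OVA path, host, datastore, and VM name are placeholders, and the exact OVF property keys vary by controller version, so inspect them before filling anything in):

```powershell
# Inspect the OVA's configurable properties, then deploy it
$ova = 'C:\isos\controller.ova'                 # placeholder path to the downloaded Controller OVA
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.ToHashTable()                        # lists the management IP/mask/gateway keys to populate

Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'nsx-alb-controller' `
  -VMHost (Get-VMHost 'esxi01.lab.bowdre.net') -Datastore (Get-Datastore 'datastore1') `
  -DiskStorageFormat Thin
```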
Configure NSX-ALB
Point a browser to the NSX-ALB appliance's IP and log in as admin using the password you copied from the download page (I told you it would come in handy!). Once you're in, you'll be prompted to establish a passphrase (for backups and such) and provide DNS resolver(s) and the DNS search domain. Set the SMTP option to "None" and leave the other options at the defaults.
I'd then recommend clicking the little Avi icon at the top right of the screen and using the My Account button to change the admin password to something that's not posted out on the internet. You know, for reasons.
Cloud
Go to Infrastructure > Clouds and click the pencil icon for Default-Cloud, then set the Cloud Infrastructure Type to "VMware". Input the credentials needed for connecting to your vCenter, and make sure the account has "Write" access so it can create the Service Engine VMs and whatnot.
Click over to the Data Center tab and point it to the virtual data center used in your vCenter. On the Network tab, select the network that will be used for management traffic. Also configure a subnet (in CIDR notation) and gateway, and add a small static IP address pool that can be assigned to the Service Engine nodes (I used something like 192.168.1.120-192.168.1.126).
Networks
Once that's sorted, navigate to Infrastructure > Cloud Resources > Networks. You should already see the networks which were imported from vCenter; find the one you'll use for servers (like your pending vRA cluster) and click the pencil icon to edit it. Then click the Add Subnet button, define the subnet in CIDR format, and add a static IP pool as well. Also go ahead and select the Default-Group as the Template Service Engine Group.
Back on the Networks list, you should now see both your management and server network defined with IP pools for each.
IPAM profile
Now go to Templates > Profiles > IPAM/DNS Profiles, click the Create button at the top right, and select IPAM Profile. Give it a name, set Type to Avi Vantage IPAM, pick the appropriate Cloud, and then also select the Networks for which you created the IP pools earlier.
Then go back to Infrastructure > Clouds, edit the Cloud, and select the IPAM Profile you just created.
Service Engine Group
Navigate to Infrastructure > Cloud Resources > Service Engine Group and edit the Default-Group. I left everything on the Basic Settings tab at the defaults. On the Advanced tab, I specified which vSphere cluster the Service Engines should be deployed to. And I left everything else with the default settings.
SSL Certificate
Hop over to Templates > Security > SSL/TLS Certificates and click Create > Application Certificate. Give the new cert a name and change the Type to CSR to generate a new signing request. Enter the Common Name you're going to want to use for the load balancer VIP (something like vra, perhaps?) and all the usual cert fields. Use the Subject Alternative Name (SAN) section at the bottom to add all the other components, like the individual vRA cluster members by both hostname and FQDN. I went ahead and included those IPs as well for good measure:
Name:
- vra.domain.local
- vra01.domain.local
- vra01
- 192.168.1.41
- vra02.domain.local
- vra02
- 192.168.1.42
- vra03.domain.local
- vra03
- 192.168.1.43

Click Save.
Click Create again, but this time select Root/Intermediate CA Certificate and upload/paste your CA's cert so it can be trusted. Save your work.
Back at the cert list, find your new application cert and click the pencil icon to edit it. Copy the Certificate Signing Request field and go get it signed by your CA. Be sure to grab the certificate chain (base64-encoded) as well if you can. Come back and paste in / upload your shiny new CA-signed certificate file.
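Once the signed cert comes back from the CA, it's worth a quick check that all of those SANs actually survived the signing process before pasting it in; for example (assuming the signed cert was saved as vra.crt):

```shell
# openssl should echo back the full SAN list defined above
openssl x509 -in vra.crt -noout -text | grep -A 1 'Subject Alternative Name'
```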
Virtual Service
Now it's finally time to create the Virtual Service that will function as the load balancer front-end. Pop over to Applications > Virtual Services and click Create Virtual Service > Basic Setup. Give it a name and set the Application Type to HTTPS, which will automatically set the port and bind a default self-signed certificate.
Click on the Certificate field and select the new cert you created above. Be sure to remove the default cert.
Tick the box to auto-allocate the IP(s), and select the appropriate network and subnet.
Add your vRA servers (current and future) by their IP addresses (192.168.1.41, 192.168.1.42, 192.168.1.43), and then click Save.
Now that the Virtual Service is created, make a note of the IP address assigned to the service and go add that to your DNS so that the name will resolve.
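If that zone happens to live on a Windows DNS server, a one-liner takes care of the record; a sketch with made-up values (assume the VIP came out as 192.168.1.40, the zone is domain.local, and dns01 is the DNS server):

```powershell
# Create the A record for the load balancer VIP, then confirm it resolves
Add-DnsServerResourceRecordA -ComputerName dns01.domain.local -ZoneName 'domain.local' -Name 'vra' -IPv4Address '192.168.1.40'
nslookup vra.domain.local
```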
Now do vRA
Log into Lifecycle Manager in a new browser tab/window. Make sure that you've mapped an Install product binary for your current version of vRA; the upgrade binary that you probably used to do your last update won't cut it. It's probably also a good idea to go make a snapshot of your vRA and IDM instances just in case.
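Snapshotting those appliances is quick from PowerCLI if you'd rather not click through vCenter; a sketch, assuming the VMs are simply named vra and idm in inventory:

```powershell
Connect-VIServer vcsa.lab.bowdre.net
Get-VM -Name vra, idm | New-Snapshot -Name 'pre-ha-scaleout' -Description 'Before converting vRA to a 3-node cluster'
```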
Adding new certificate
In LCM, go to Locker > Certificates and select the option to Import. Switch back to the NSX-ALB tab and go to Templates > Security > SSL/TLS Certificates. Click the little down-arrow-in-a-circle "Export" icon next to the application certificate you created earlier. Copy the key section and paste that into LCM. Then open the file containing the certificate chain you got from your CA, copy its contents, and paste it into LCM as well. Do not try to upload a certificate file directly to LCM; that will fail unless the file includes both the cert and the private key, and that's silly.
Once the cert is successfully imported, go to the Lifecycle Operations component of LCM and navigate to the environment containing your vRA instance. Select the vRA product, hit the three-dot menu, and use the Replace Certificate option to replace the old and busted cert with the new HA-ready one. It will take a little bit for this to get applied. Don't move on until vRA services are back up.
Scale out vRA
Still on the vRA product page, click on the + Add Components button.
On the Infrastructure page, tell LCM where to put the new vRA VMs.
On the Network page, tell it which network configuration to use.
On the Components page, scroll down a bit and click on (+) > vRealize Automation Secondary Node - twice. That will reveal a new section labeled Cluster Virtual IP. Put in the FQDN you configured for the Virtual Service, and tick the box to terminate SSL at the load balancer. Then scroll on down and enter the details for the additional vRA nodes, making sure that the IP addresses match the servers you added to the Virtual Service configuration and that the FQDNs match what's in the SSL cert.
Click on through to do the precheck and ultimately kick off the deployment. It'll take a while, but you'll eventually be able to connect to the NSX-ALB at vra.domain.local and get passed along to one of your three cluster nodes.
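A quick way to confirm the VIP is answering (and presenting the CA-signed cert) without waiting on a full login is to poke it from a terminal:

```shell
# expect an HTTP response from one of the cluster nodes by way of the load balancer
curl -kI https://vra.domain.local/
```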
Have fun!
Free serverless URL shortener on Google Cloud Run (https://runtimeterror.dev/free-serverless-url-shortener-google-cloud-run/)
Tags: gcp, cloud, serverless

Intro
I've been using short.io↗ with a custom domain to keep track of and share messy links for a few months now. That approach has worked very well, but it's also seriously overkill for my needs. I don't need (nor want) tracking metrics to know anything about when those links get clicked, and short.io doesn't provide an easy way to turn that off. I was casually looking for a lighter self-hosted alternative today when I stumbled upon a serverless alternative: sheets-url-shortener↗. This uses Google Cloud Run↗ to run an ultralight application container which receives an incoming web request, looks for the path in a Google Sheet, and redirects the client to the appropriate URL. It supports connecting with a custom domain, and should run happily within the Cloud Run Free Tier limits↗.
The GitHub instructions were pretty straightforward, but I did have to fumble through a few additional steps to get everything up and running. Here we go:
Shortcut mapping
Since the setup uses a simple Google Sheets document to map the shortcuts to the original long-form URLs, I started by going to https://sheets.new↗ to create a new Sheet. I then just copied in the shortcuts and URLs I was already using in short.io. By the way, I learned on a previous attempt that this solution only works with lowercase shortcuts, so I made sure to convert my MixedCase ones as I went. I then made a note of the Sheet ID from the URL; that's the bit that looks like 1SMeoyesCaGHRlYdGj9VyqD-qhXtab1jrcgHZ0irvNDs. That will be needed later on.
Create a new GCP project
I created a new project in my GCP account by going to https://console.cloud.google.com/projectcreate↗ and entering a descriptive name.

Deploy to GCP
At this point, I was ready to actually kick off the deployment. Ahmet made this part exceptionally easy: just hit the Run on Google Cloud button from the GitHub project page↗. That opens up a Google Cloud Shell instance which prompts for authorization before it starts the deployment script. The script prompted me to select a project and a region, and then asked for the Sheet ID that I copied earlier.

Grant access to the Sheet
In order for the Cloud Run service to be able to see the URL mappings in the Sheet, I needed to share the Sheet with the service account. That service account is found by going to https://console.cloud.google.com/run↗, clicking on the new sheets-url-shortener service, and then viewing the Permissions tab. I'm interested in the one that's ############[email protected]. I then went back to the Sheet, hit the big Share button at the top, and shared the Sheet to the service account with Viewer access.

Quick test
Back in GCP land, the details page for the sheets-url-shortener Cloud Run service shows a gross-looking URL near the top: https://sheets-url-shortener-vrw7x6wdzq-uc.a.run.app. That doesn't do much for shortening my links, but it'll do just fine for a quick test. First, I pointed my browser straight to that listed URL. This at least tells me that the web server portion is working. Now to see if I can redirect to my project car posts on Polywork↗. Hmm, not quite. Luckily the error tells me exactly what I need to do...
Enable Sheets API
I just needed to visit https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=############ to enable the Google Sheets API. Once that's done, I can try my redirect again - and, after a brief moment, it successfully sends me on to Polywork!

Link custom domain
The whole point of this project is to shorten URLs, but I haven't done that yet. I'll want to link in my go.bowdre.net domain to use that in place of the rather unwieldy https://sheets-url-shortener-somestring-uc.a.run.app. I do that by going back to the Cloud Run console↗ and selecting the option at the top to Manage Custom Domains. I can then use the Add Mapping button, select my sheets-url-shortener service, choose one of my verified domains (which I think are already verified since they're registered through Google Domains with the same account), and then specify the desired subdomain. The wizard then tells me exactly what record I need to create/update with my domain host. It took a while for the domain mapping to go live once I updated the record.

Final tests
Once it did finally update, I was able to hit https://go.bowdre.net to get the error/landing page, complete with a valid SSL cert.

Outro
I'm very pleased with how this quick little project turned out. Managing my shortened links with a Google Sheet is quite convenient, and I really like the complete lack of tracking or analytics. Plus I'm a sucker for an excuse to use a cloud technology I haven't played a lot with yet.
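For those final tests, the redirects can also be verified from a terminal; a quick sketch using one of the short links listed below (expect a redirect status code and a Location header pointing at the long URL):

```shell
curl -sI https://go.bowdre.net/conedoge | grep -i -E '^HTTP|^location'
```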
And now I can hand out handy-dandy short links!
| Link | Description |
|------|-------------|
| go.bowdre.net/coso↗ | Follow me on CounterSocial |
| go.bowdre.net/conedoge↗ | 2014 Subaru BRZ autocross videos |
| go.bowdre.net/cooltechshit↗ | A collection of cool tech shit (references and resources) |
| go.bowdre.net/stuffiuse↗ | Things that I use (and think you should use too) |
| go.bowdre.net/shorterer↗ | This post! |

Creating static records in Microsoft DNS from vRealize Automation (https://runtimeterror.dev/creating-static-records-in-microsoft-dns-from-vrealize-automation/)
Tags: vmware, vra, vro, javascript, powershell, automation

One of the requirements for my vRA deployments is the ability to automatically create a static A record for non-domain-joined systems so that users can connect without needing to know the IP address. The organization uses Microsoft DNS servers to provide resolution on the internal domain. At first glance, this shouldn't be too much of a problem: vRealize Orchestrator 8.x can run PowerShell scripts, and PowerShell can use the Add-DnsServerResourceRecord cmdlet↗ to create the needed records.
Not so fast, though. That cmdlet is provided through the Remote Server Administration Tools↗ package so it won't be available within the limited PowerShell environment inside of vRO. A workaround might be to add a Windows machine to vRO as a remote PowerShell host, but then you run into issues of credential hopping↗.
I eventually came across this blog post↗ which described adding a Windows machine as a remote SSH host instead. I'll deviate a bit from the described configuration, but that post did at least get me pointed in the right direction. This approach would get around the complicated authentication-tunneling business while still being pretty easy to set up. So let's go!
Preparing the SSH host
I deployed a Windows Server 2019 Core VM to use as my SSH host, and I joined it to my AD domain as win02.lab.bowdre.net. Once that's taken care of, I need to install the RSAT DNS tools so that I can use the Add-DnsServerResourceRecord and associated cmdlets. I can do that through PowerShell like so:
```powershell
# Install RSAT DNS tools [tl! .nocopy]
Add-WindowsCapability -online -name Rsat.Dns.Tools~~~~0.0.1.0 # [tl! .cmd_pwsh]
```

Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019:
```powershell
# Install OpenSSH Server [tl! .nocopy]
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 # [tl! .cmd_pwsh]
```

I'll also want to set it so that the default shell upon SSH login is PowerShell (rather than the standard Command Prompt) so that I can have easy access to those DNS cmdlets:
```powershell
# Set PowerShell as the default Shell (for access to DNS cmdlets) # [tl! .nocopy]
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell ` # [tl! .cmd_pwsh:2]
  -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
  -PropertyType String -Force
```

I'll be using my lab\vra service account for managing DNS. I've already given it the appropriate rights on the DNS server, but I'll also add it to the Administrators group on my SSH host:
```powershell
# Add the service account as a local administrator # [tl! .nocopy]
Add-LocalGroupMember -Group Administrators -Member "lab\vra" # [tl! .cmd_pwsh]
```

And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH:
```powershell
# Restrict SSH access to members in the local Administrators group [tl! .nocopy]
(Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", `
  "$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config" # [tl! .cmd_pwsh:-1]
```

Finally, I'll start the sshd service and set it to start up automatically:
```powershell
# Start service and set it to automatic [tl! .nocopy]
Set-Service -Name sshd -StartupType Automatic -Status Running # [tl! .cmd_pwsh]
```
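Before moving on, it doesn't hurt to confirm from another machine that the SSH port is actually listening; for example:

```powershell
# TcpTestSucceeded should come back True if sshd is reachable on port 22
Test-NetConnection -ComputerName win02.lab.bowdre.net -Port 22
```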
A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:

```powershell
ssh [email protected] # [tl! .cmd_pwsh]
[email protected]'s password: # [tl! .nocopy:3]

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net `
  -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99 # [tl! .cmd_pwsh:-1]

nslookup testy # [tl! .cmd_pwsh]
Server:  win01.lab.bowdre.net # [tl! .nocopy:start]
Address:  192.168.1.5

Name:    testy.lab.bowdre.net
Address:  172.16.99.99 # [tl! .nocopy:end]

Remove-DnsServerResourceRecord -ComputerName win01.lab.bowdre.net `
  -Name testy -ZoneName lab.bowdre.net -RRType A -Force # [tl! .cmd_pwsh:-1]

nslookup testy # [tl! .cmd_pwsh]
Server:  win01.lab.bowdre.net # [tl! .nocopy:3]
Address:  192.168.1.5

*** win01.lab.bowdre.net can't find testy: Non-existent domain
```

Cool! Now I just need to do that same thing, but from vRealize Orchestrator. First, though, I'll update the template so the requester can choose whether or not a static record will get created.
Template changes
Cloud Template
Similar to the template changes I made for optionally joining deployed servers to the Active Directory domain, I'll just be adding a simple boolean checkbox to the inputs section of the template in Cloud Assembly:
```yaml
formatVersion: 1
inputs:
  [...]
  staticDns:
    title: Create static DNS record
    type: boolean
    default: false
  [...]
```

Unlike the AD piece, in the resources section I'll just bind a custom property called staticDns to the input with the same name:
```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      [...]
      staticDns: '${input.staticDns}'
      [...]
```

So here's the complete cloud template that I've been working on:
# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 # [tl! focus:1] inputs: site: # [tl! collapse:5] type: string title: Site enum: - BOW - DRE image: # [tl! collapse:6] type: string title: Operating System oneOf: - title: Windows Server 2019 const: ws2019 default: ws2019 size: # [tl! collapse:10] title: Resource Size type: string oneOf: - title: &#39;Micro [1vCPU|1GB]&#39; const: micro - title: &#39;Tiny [1vCPU|2GB]&#39; const: tiny - title: &#39;Small [2vCPU|2GB]&#39; const: small default: small network: # [tl! collapse:2] title: Network type: string adJoin: # [tl! collapse:3] title: Join to AD domain type: boolean default: true staticDns: # [tl! highlight:3 focus:3] title: Create static DNS record type: boolean default: false environment: # [tl! collapse:10] type: string title: Environment oneOf: - title: Development const: D - title: Testing const: T - title: Production const: P default: D function: # [tl! collapse:14] type: string title: Function Code oneOf: - title: Application (APP) const: APP - title: Desktop (DSK) const: DSK - title: Network (NET) const: NET - title: Service (SVS) const: SVS - title: Testing (TST) const: TST default: TST app: # [tl! collapse:5] type: string title: Application Code minLength: 3 maxLength: 3 default: xxx description: # [tl! collapse:4] type: string title: Description description: Server function/purpose default: Testing and evaluation poc_name: # [tl! collapse:3] type: string title: Point of Contact Name default: Jack Shephard poc_email: # [tl! collapse:4] type: string title: Point of Contact Email default: [email protected] pattern: &#39;^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$&#39; ticket: # [tl! collapse:3] type: string title: Ticket/Request Number default: 4815162342 resources: # [tl! focus:3] Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: # [tl! collapse:start] image: &#39;\${input.image}&#39; flavor: &#39;\${input.size}&#39; site: &#39;\${input.site}&#39; environment: &#39;\${input.environment}&#39; function: &#39;\${input.function}&#39; app: &#39;\${input.app}&#39; ignoreActiveDirectory: &#39;\${!input.adJoin}&#39; activeDirectory: relativeDN: &#39;\${&#34;OU=Servers,OU=Computers,OU=&#34; + input.site + &#34;,OU=LAB&#34;}&#39; customizationSpec: &#39;\${input.adJoin ? &#34;vra-win-domain&#34; : &#34;vra-win-workgroup&#34;}&#39; # [tl! collapse:end] staticDns: &#39;\${input.staticDns}&#39; # [tl! focus highlight] dnsDomain: lab.bowdre.net # [tl! collapse:start] poc: &#39;\${input.poc_name + &#34; (&#34; + input.poc_email + &#34;)&#34;}&#39; ticket: &#39;\${input.ticket}&#39; description: &#39;\${input.description}&#39; networks: - network: &#39;\${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:\${to_lower(input.site)}&#39; # [tl! collapse:end] Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: # [tl! collapse:3] networkType: existing constraints: - tag: &#39;net:\${input.network}&#39; I save the template, and then also hit the &quot;Version&quot; button to publish a new version to the catalog: Service Broker Custom Form I switch over to the Service Broker UI to update the custom form - but first I stop off at Content &amp; Policies &gt; Content Sources, select my Content Source, and hit the Save &amp; Import button to force a sync of the cloud templates. I can then move on to the Content &amp; Policies &gt; Content section, click the 3-dot menu next to my template name, and select the option to Customize Form.
I'll just drag the new Schema Element called Create static DNS record from the Request Inputs panel onto the form canvas. I'll drop it right below the Join to AD domain field. And then I'll hit the Save button so that my efforts are preserved.
That should take care of the front-end changes. Now for the back-end stuff: I need to teach vRO how to connect to my SSH host and run the PowerShell commands, just like I tested earlier.
The vRO solution
I will be adding the DNS action onto my existing "VM Post-Provisioning" workflow (described here), which gets triggered after the VM has been successfully deployed.
Configuration Element
But first, I'm going to go to the Assets > Configurations section of the Orchestrator UI and create a new Configuration Element to store variables related to the SSH host and DNS configuration. I'll call it dnsConfig and put it in my CustomProvisioning folder. And then I create the following variables:
| Variable | Value | Type |
|----------|-------|------|
| sshHost | win02.lab.bowdre.net | string |
| sshUser | vra | string |
| sshPass | ***** | secureString |
| dnsServer | [win01.lab.bowdre.net] | Array/string |
| supportedDomains | [lab.bowdre.net] | Array/string |

sshHost is my new win02 server that I'm going to connect to via SSH, and sshUser and sshPass should explain themselves. The dnsServer array will tell the script which DNS servers to try to create the record on; this will just be a single server in my lab, but I'm going to construct the script to support multiple servers in case one isn't reachable. And supportedDomains will be used to restrict where I'll be creating records; again, that's just a single domain in my lab, but I'm building this solution to account for the possibility that a VM might need to be deployed on a domain where I can't create a static record in this way, so I want it to fail elegantly.
Here's what the new configuration element looks like:

Workflow to create records
I'll need to tell my workflow about the variables held in the dnsConfig Configuration Element I just created. I do that by opening the "VM Post-Provisioning" workflow in the vRO UI, clicking the Edit button, and then switching to the Variables tab. I create a variable for each member of dnsConfig, and enable the toggle to Bind to configuration so that I can select the corresponding item. It's important to make sure that the variable type exactly matches what's in the configuration element so that you'll be able to pick it! I repeat that for each of the remaining variables until all the members of dnsConfig are represented in the workflow.

Now we're ready for the good part: inserting a new scriptable task into the workflow schema. I'll call it Create DNS Record and place it directly after the Set Notes task. For inputs, the task will take in inputProperties (Properties) as well as everything from that dnsConfig configuration element. And here's the JavaScript for the task:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Create DNS Record task
//    Inputs: inputProperties (Properties), dnsServers (Array/string),
//      sshHost (string), sshUser (string), sshPass (secureString),
//      supportedDomains (Array/string)
//    Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var created = false;

// check if user requested a record to be created and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to create DNS record for "+hostname+"."+dnsDomain+" at "+ipAddress+"...")
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to "+sshHost+"...")
  sshSession.connectWithPassword(sshPass)
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (created == false) {
      System.debug("Using DNS Server "+dnsServer+"...")
      // insert the PowerShell command to create A record
      var sshCommand = 'Add-DnsServerResourceRecordA -ComputerName '+dnsServer+' -ZoneName '+dnsDomain+' -Name '+hostname+' -AllowUpdateAny -IPv4Address '+ipAddress;
      System.debug("sshCommand: "+sshCommand)
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true)
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully created DNS record!")
        // make a note that it was successful so we don't repeat this unnecessarily
        created = true;
      }
    }
  }
  sshSession.disconnect()
  if (created == false) {
    System.warn("Error! Unable to create DNS record.")
  }
} else {
  System.log("Not trying to do DNS")
}
```

Now I can just save the workflow, and I'm done! - with this part. Of course, being able to create a static record is just one half of the fight; I also need to make sure that vRA will be able to clean up these static records when a deployment gets deleted.
Workflow to delete records
I haven't previously created any workflows that fire on deployment removal, so I'll create a new one and call it VM Deprovisioning. This workflow only needs a single input (inputProperties (Properties)) so it can receive information about the deployment from vRA. I'll also need to bind in the variables from the dnsConfig element as before. The schema will include a single scriptable task, and it's going to be pretty damn similar to the other one:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Delete DNS Record task
//    Inputs: inputProperties (Properties), dnsServers (Array/string),
//      sshHost (string), sshUser (string), sshPass (secureString),
//      supportedDomains (Array/string)
//    Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var deleted = false;

// check if user requested a record to be created and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to remove DNS record for "+hostname+"."+dnsDomain+" at "+ipAddress+"...")
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to "+sshHost+"...")
  sshSession.connectWithPassword(sshPass)
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (deleted == false) {
      System.debug("Using DNS Server "+dnsServer+"...")
      // insert the PowerShell command to delete A record
      var sshCommand = 'Remove-DnsServerResourceRecord -ComputerName '+dnsServer+' -ZoneName '+dnsDomain+' -RRType A -Name '+hostname+' -Force';
      System.debug("sshCommand: "+sshCommand)
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true)
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully deleted DNS record!")
        // make a note that it was successful so we don't repeat this unnecessarily
        deleted = true;
      }
    }
  }
  sshSession.disconnect()
  if (deleted == false) {
    System.warn("Error! Unable to delete DNS record.")
  }
} else {
  System.log("No need to clean up DNS.")
}
```

Since this is a new workflow, I'll also need to head back to Cloud Assembly > Extensibility > Subscriptions and add a new subscription to call it when a deployment gets deleted. I'll call it "VM Deprovisioning", assign it to the "Compute Post Removal" Event Topic, and link it to my new "VM Deprovisioning" workflow. I could use the Condition option to filter this only for deployments which had a static DNS record created, but I'll later want to use this same workflow for other cleanup tasks, so I'll just save it as-is for now.

Testing
Now I can (finally) fire off a quick deployment to see if all this mess actually works. Once the deployment completes, I go back into vRO, find the most recent item in the Workflow Runs view, and click over to the Logs tab to see how I did. And I can run a quick query to make sure that name actually resolves:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
172.16.30.10 # [tl! .nocopy]
```

It works!
Now to test the cleanup. For that, I'll head back to Service Broker, navigate to the Deployments tab, find my deployment, click the little three-dot menu button, and select the Delete option: Again, I'll check the Workflow Runs in vRO to see that the deprovisioning task completed successfully: And I can dig a little more to make sure the name doesn't resolve anymore:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
```

It really works!
Conclusion So there you have it - how I've got vRA/vRO able to create and delete static DNS records as needed, using a Windows SSH host as an intermediary. Cool, right?
Recreating Hashnode Series (Categories) in Jekyll on GitHub Pages (https://runtimeterror.dev/recreating-hashnode-series-categories-in-jekyll-on-github-pages/)
Tags: meta, jekyll

I recently migrated this site from Hashnode to GitHub Pages, and I'm really getting into the flexibility and control that managing the content through Jekyll provides. So, naturally, after finalizing the move I got to work recreating Hashnode's "Series" feature, which lets you group posts together and highlight them as a collection. One of the things I liked about the Series setup was that I could control the order of the collected posts: my posts about building out the vRA environment in my homelab are probably best consumed in chronological order (oldest to newest) since the newer posts build upon the groundwork laid by the older ones, while posts about my other one-off projects could really be enjoyed in any order.
I quickly realized that if I were hosting this pretty much anywhere other than GitHub Pages I could simply leverage the jekyll-archives↗ plugin to manage this for me - but, alas, that's not one of the plugins supported by the platform↗. I needed to come up with my own solution, and being still quite new to Jekyll (and this whole website design thing in general) it took me a bit of fumbling to get it right.
Reviewing the theme-provided option
The Jekyll theme I'm using (Minimal Mistakes↗) comes with built-in support↗ for a category archive page, which (like the tags page) displays all the categorized posts on a single page. Links at the top will let you jump to an appropriate anchor to start viewing the selected category, but it's not really an elegant way to display a single category. It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at _pages/category-archive.md↗:
// torchlight! {&#34;lineNumbers&#34;: true} --- title: &#34;Posts by Category&#34; layout: categories permalink: /categories/ author_profile: true --- The title indicates what's going to be written in bold text at the top of the page, the permalink says that it will be accessible at http://localhost/categories/, and the nice little author_profile sidebar will appear on the left.
This page then calls the categories layout, which is defined in _layouts/categories.html↗:
# torchlight! {&#34;lineNumbers&#34;: true} --- layout: archive --- {{ content }} {% assign categories_max = 0 %} {% for category in site.categories %} {% if category[1].size &gt; categories_max %} {% assign categories_max = category[1].size %} {% endif %} {% endfor %} &lt;ul class=&#34;taxonomy__index&#34;&gt; {% for i in (1..categories_max) reversed %} {% for category in site.categories %} {% if category[1].size == i %} &lt;li&gt; &lt;a href=&#34;#{{ category[0] | slugify }}&#34;&gt; &lt;strong&gt;{{ category[0] }}&lt;/strong&gt; &lt;span class=&#34;taxonomy__count&#34;&gt;{{ i }}&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; {% endif %} {% endfor %} {% endfor %} &lt;/ul&gt; {% assign entries_layout = page.entries_layout | default: &#39;list&#39; %} {% for i in (1..categories_max) reversed %} {% for category in site.categories %} {% if category[1].size == i %} &lt;section id=&#34;{{ category[0] | slugify | downcase }}&#34; class=&#34;taxonomy__section&#34;&gt; &lt;h2 class=&#34;archive__subtitle&#34;&gt;{{ category[0] }}&lt;/h2&gt; &lt;div class=&#34;entries-{{ entries_layout }}&#34;&gt; {% for post in category.last %} {% include archive-single.html type=entries_layout %} {% endfor %} &lt;/div&gt; &lt;a href=&#34;#page-title&#34; class=&#34;back-to-top&#34;&gt;{{ site.data.ui-text[site.locale].back_to_top | default: &#39;Back to Top&#39; }} &amp;uarr;&lt;/a&gt; &lt;/section&gt; {% endif %} {% endfor %} {% endfor %}{% endraw %} I wanted my solution to preserve the formatting that's used by the theme elsewhere on this site so this bit is going to be my base. The big change I'll make is that instead of enumerating all of the categories on one page, I'll have to create a new static page for each of the categories I'll want to feature. And each of those pages will refer to a new layout to determine what will actually appear on the page.
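To keep the moving pieces straight, here's roughly where the new files described in the rest of this post will land in the repo (the paths are the ones named later on, gathered here for reference):

```
_layouts/series.html        # new layout that renders one category's posts
_pages/series-vra8.md       # featured series page for the vRA 8 posts
_pages/series-tips.md       # featured series page for tips & tricks
_pages/series-archive.md    # renamed category archive (was category-archive.md)
_data/navigation.yml        # header links pointing at the new pages
```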
Defining a new layout
I create a new file called _layouts/series.html which will define how these new series pages get rendered. It starts out just like the default categories.html one:
# torchlight! {&#34;lineNumbers&#34;: true} --- layout: archive --- {{ content }} That {{ content }} block will let me define text to appear above the list of articles - very handy. Much of the original categories.html code has to do with iterating through the list of categories. I won't need that, though, so I'll jump straight to setting what layout the entries on this page will use:
{% assign entries_layout = page.entries_layout | default: &#39;list&#39; %} I'll be including two custom variables in the Front Matter↗ for my category pages: tag to specify what category to filter on, and sort_order which will be set to reverse if I want the older posts up top. I'll be able to access these in the layout as page.tag and page.sort_order, respectively. So I'll go ahead and grab all the posts which are categorized with page.tag, and then decide whether the posts will get sorted normally or in reverse:
# torchlight! {&#34;lineNumbers&#34;: true} {% assign posts = site.categories[page.tag] %} {% if page.sort_order == &#39;reverse&#39; %} {% assign posts = posts | reverse %} {% endif %} And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page:
# torchlight! {&#34;lineNumbers&#34;: true} &lt;div class=&#34;entries-{{ entries_layout }}&#34;&gt; {% for post in posts %} {% include archive-single.html type=entries_layout %} {% endfor %} &lt;/div&gt; Putting it all together now, here's my new _layouts/series.html file:
```liquid
# torchlight! {"lineNumbers": true}
---
layout: archive
---

{{ content }}

{% assign entries_layout = page.entries_layout | default: 'list' %}
{% assign posts = site.categories[page.tag] %}
{% if page.sort_order == 'reverse' %}
  {% assign posts = posts | reverse %}
{% endif %}
<div class="entries-{{ entries_layout }}">
  {% for post in posts %}
    {% include archive-single.html type=entries_layout %}
  {% endfor %}
</div>
```

Series pages
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new _pages/series-vra8.md and setting it up thusly:
```
// torchlight! {"lineNumbers": true}
---
title: "Adventures in vRealize Automation 8"
layout: series
permalink: "/categories/vmware"
tag: vRA8
sort_order: reverse
author_profile: true
header:
  teaser: assets/images/posts-2020/RtMljqM9x.png
---

*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*
```

You can see that this page is referencing the series layout I just created, and it's going to live at http://localhost/categories/vmware - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.
Check it out here: The other series pages will be basically the same, just without the reverse sort directive. Here's _pages/series-tips.md:
```
// torchlight! {"lineNumbers": true}
---
title: "Tips & Tricks"
layout: series
permalink: "/series/tips"
tag: Tips
author_profile: true
header:
  teaser: assets/images/posts-2020/kJ_l7gPD2.png
---

*Useful tips and tricks I've stumbled upon.*
```

Changing the category permalink
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at /series/ instead of /categories/. I'll start with going into the _config.yml file and changing the category_archive path:
# torchlight! {&#34;lineNumbers&#34;: true} category_archive: type: liquid # path: /categories/ path: /series/ tag_archive: type: liquid path: /tags/ I'll also rename _pages/category-archive.md to _pages/series-archive.md and update its title and permalink:
```
// torchlight! {"lineNumbers": true}
---
title: "Posts by Series"
layout: categories
permalink: /series/
author_profile: true
---
```

Fixing category links in posts
The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (/series/#vra8) instead of the series feature pages I created (/categories/vmware). That works, but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
I started with the _layouts/single.html↗ file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
# torchlight! {&#34;lineNumbers&#34;: true} &lt;footer class=&#34;page__meta&#34;&gt; {% if site.data.ui-text[site.locale].meta_label %} &lt;h4 class=&#34;page__meta-title&#34;&gt;{{ site.data.ui-text[site.locale].meta_label }}&lt;/h4&gt; {% endif %} {% include page__taxonomy.html %} {% include page__date.html %} &lt;/footer&gt; It looks like page__taxonomy.html↗ is being used to display the tags and categories, so I then went to that file in the _include directory:
# torchlight! {&#34;lineNumbers&#34;: true} {% if site.tag_archive.type and page.tags[0] %} {% include tag-list.html %} {% endif %} {% if site.category_archive.type and page.categories[0] %} {% include category-list.html %} {% endif %} Okay, it looks like _include/category-list.html↗ is what I actually want. Here's that file:
# torchlight! {&#34;lineNumbers&#34;: true} {% case site.category_archive.type %} {% when &#34;liquid&#34; %} {% assign path_type = &#34;#&#34; %} {% when &#34;jekyll-archives&#34; %} {% assign path_type = nil %} {% endcase %} {% if site.category_archive.path %} {% assign categories_sorted = page.categories | sort_natural %} &lt;p class=&#34;page__taxonomy&#34;&gt; &lt;strong&gt;&lt;i class=&#34;fas fa-fw fa-folder-open&#34; aria-hidden=&#34;true&#34;&gt;&lt;/i&gt; {{ site.data.ui-text[site.locale].categories_label | default: &#34;categories:&#34; }} &lt;/strong&gt; &lt;span itemprop=&#34;keywords&#34;&gt; {% for category_word in categories_sorted %} &lt;a href=&#34;{{ category_word | slugify | prepend: path_type | prepend: site.category_archive.path | relative_url }}&#34; class=&#34;page__taxonomy-item p-category&#34; rel=&#34;tag&#34;&gt;{{ category_word }}&lt;/a&gt;{% unless forloop.last %}&lt;span class=&#34;sep&#34;&gt;, &lt;/span&gt;{% endunless %} {% endfor %} &lt;/span&gt; &lt;/p&gt; {% endif %} I'm using the liquid archive approach since I can't use the jekyll-archives plugin, so I can see that it's setting the path_type to &quot;#&quot;. And near the bottom of the file, I can see that it's assembling the category link by slugifying the category_word, sticking the path_type in front of it, and then putting the site.category_archive.path (which I edited earlier in _config.yml) in front of that. So that's why my category links look like /series/#category. I can just edit the top of this file to statically set path_type = nil and that should clear this up in a jiffy:
# torchlight! {&#34;lineNumbers&#34;: true} {% assign path_type = nil %} {% if site.category_archive.path %} {% assign categories_sorted = page.categories | sort_natural %} [...] To sell the series illusion even further, I can pop into _data/ui-text.yml↗ to update the string used for categories_label:
# torchlight! {&#34;lineNumbers&#34;: true} meta_label : tags_label : &#34;Tags:&#34; categories_label : &#34;Series:&#34; date_label : &#34;Updated:&#34; comments_label : &#34;Leave a comment&#34; Much better!
Updating the navigation header And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit _data/navigation.yml with links to my new pages:
# torchlight! {&#34;lineNumbers&#34;: true} main: - title: &#34;vRealize Automation 8&#34; url: /categories/vmware - title: &#34;Projects&#34; url: /categories/self-hosting - title: &#34;Code&#34; url: /series/code - title: &#34;Tips &amp; Tricks&#34; url: /series/tips - title: &#34;Tags&#34; url: /tags/ - title: &#34;All Posts&#34; url: /posts/ All done! I set out to recreate the series setup that I had over at Hashnode, and I think I've accomplished that. More importantly, I've learned quite a bit more about how Jekyll works, and I'm already plotting further tweaks. For now, though, I think this is ready for a git push!
`,description:"",url:"https://runtimeterror.dev/recreating-hashnode-series-categories-in-jekyll-on-github-pages/"},"https://runtimeterror.dev/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/":{title:"Joining VMs to Active Directory in site-specific OUs with vRA8",tags:["vmware","vra","abx","activedirectory","automation","windows"],content:"Connecting a deployed Windows VM to an Active Directory domain is pretty easy; just apply an appropriately-configured customization spec↗ and vCenter will take care of it for you. Of course, you'll likely then need to move the newly-created computer object to the correct Organizational Unit so that it gets all the right policies and such.\nFortunately, vRA 8 supports adding an Active Directory integration to handle staging computer objects in a designated OU. And vRA 8.3 even introduced the ability↗ to let blueprints override the relative DN path. That will be helpful in my case since I'll want the servers to be placed in different OUs depending on which site they get deployed to:\nSite OU BOW lab.bowdre.net/LAB/BOW/Computers/Servers DRE lab.bowre.net/LAB/DRE/Computers/Servers I didn't find a lot of documentation on how make this work, though, so here's how I've implemented it in my lab (now running vRA 8.4.2).\nAdding the AD integration First things first: connecting vRA to AD. I do this by opening the Cloud Assembly interface, navigating to Infrastructure &gt; Connections &gt; Integrations, and clicking the Add Integration button. I'm then prompted to choose the integration type so I select the Active Directory one, and then I fill in the required information: a name (Lab AD seems appropriate), my domain controller as the LDAP host (ldap://win01.lab.bowdre.net:389), credentials for an account with sufficient privileges to create and delete computer objects (lab\\vra), and finally the base DN to be used for the LDAP connection (DC=lab,DC=bowdre,DC=net).\nClicking the Validate button quickly confirms that I've entered the information correctly, and then I can click Add to save my work.\nI'll then need to associate the integration with a project by opening the new integration, navigating to the Projects tab, and clicking Add Project. Now I select the project name from the dropdown, enter a valid relative OU (OU=LAB), and enable the options to let me override the relative OU and optionally skip AD actions from the cloud template.\nCustomization specs As mentioned above, I'll leverage the customization specs in vCenter to handle the actual joining of a computer to the domain. I maintain two specs for Windows deployments (one to join the domain and one to stay on the workgroup), and I can let the vRA cloud template decide which should be applied to a given deployment.\nFirst, the workgroup spec, appropriately called vra-win-workgroup: It's about as basic as can be, including using DHCP for the network configuration (which doesn't really matter since the VM will eventually get a static IP assigned from {php}IPAM).\nvra-win-domain is basically the same, with one difference: Now to reference these specs from a cloud template...\nCloud template I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template:\n# torchlight! {&#34;lineNumbers&#34;: true} inputs: [...] adJoin: title: Join to AD domain type: boolean default: true [...] 
This new adJoin input is a boolean so it will appear on the request form as a checkbox, and it will default to true; we'll assume that any Windows deployment should be automatically joined to AD unless this option gets unchecked.\nIn the resources section of the template, I'll set a new property called ignoreActiveDirectory to be the inverse of the adJoin input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use activeDirectory: relativeDN to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the customizationSpec and use cloud template conditional syntax↗ to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern '${conditional-expresion ? true-value : false-value}').\n# torchlight! {&#34;lineNumbers&#34;: true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: [...] ignoreActiveDirectory: &#39;${!input.adJoin}&#39; activeDirectory: relativeDN: &#39;${&#34;OU=Servers,OU=Computers,OU=&#34; + input.site + &#34;,OU=LAB&#34;}&#39; customizationSpec: &#39;${input.adJoin ? &#34;vra-win-domain&#34; : &#34;vra-win-workgroup&#34;}&#39; [...] Here's the current cloud template in its entirety:\n# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: site: type: string title: Site enum: - BOW - DRE image: type: string title: Operating System oneOf: - title: Windows Server 2019 const: ws2019 default: ws2019 size: title: Resource Size type: string oneOf: - title: &#39;Micro [1vCPU|1GB]&#39; const: micro - title: &#39;Tiny [1vCPU|2GB]&#39; const: tiny - title: &#39;Small [2vCPU|2GB]&#39; const: small default: small network: title: Network type: string adJoin: title: Join to AD domain type: boolean default: true environment: type: string title: Environment oneOf: - title: Development const: D - title: Testing const: T - title: Production const: P default: D function: type: string title: Function Code oneOf: - title: Application (APP) const: APP - title: Desktop (DSK) const: DSK - title: Network (NET) const: NET - title: Service (SVS) const: SVS - title: Testing (TST) const: TST default: TST app: type: string title: Application Code minLength: 3 maxLength: 3 default: xxx description: type: string title: Description description: Server function/purpose default: Testing and evaluation poc_name: type: string title: Point of Contact Name default: Jack Shephard poc_email: type: string title: Point of Contact Email default: [email protected] pattern: &#39;^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$&#39; ticket: type: string title: Ticket/Request Number default: 4815162342 resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;${input.image}&#39; flavor: &#39;${input.size}&#39; site: &#39;${input.site}&#39; environment: &#39;${input.environment}&#39; function: &#39;${input.function}&#39; app: &#39;${input.app}&#39; ignoreActiveDirectory: &#39;${!input.adJoin}&#39; activeDirectory: relativeDN: &#39;${&#34;OU=Servers,OU=Computers,OU=&#34; + input.site + &#34;,OU=LAB&#34;}&#39; customizationSpec: &#39;${input.adJoin ? 
&#34;vra-win-domain&#34; : &#34;vra-win-workgroup&#34;}&#39; dnsDomain: lab.bowdre.net poc: &#39;${input.poc_name + &#34; (&#34; + input.poc_email + &#34;)&#34;}&#39; ticket: &#39;${input.ticket}&#39; description: &#39;${input.description}&#39; networks: - network: &#39;${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:${to_lower(input.site)}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing constraints: - tag: &#39;net:${input.network}&#39;

The last thing I need to do before leaving the Cloud Assembly interface is smash that Version button at the bottom of the cloud template editor so that the changes will be visible to Service Broker.

Service Broker custom form updates

...and the first thing to do after entering the Service Broker UI is to navigate to Content Sources, click on my Lab content source, and then click Save & Import to bring in the new version. I can then go to Content, click the little three-dot menu icon next to my WindowsDemo cloud template, and pick the Customize form option.

This bit will be pretty quick. I just need to look for the new Join to AD domain element on the left and drag-and-drop it onto the canvas in the middle. I'll stick it directly after the Network field. I don't need to do anything else here since I'm not trying to do any fancy logic or anything, so I'll just hit Save and move on to...

Testing

Now to submit the request through Service Broker to see if this actually works. After a few minutes, I can go into Cloud Assembly and navigate to Extensibility > Activity > Actions Runs and look at the Integration Runs to see if the ad_machine action has completed yet. Looking good! And once the deployment completes, I can look at the VM in vCenter to see that it has registered a fully-qualified DNS name since it was automatically joined to the domain. I can also repeat the test for a VM deployed to the DRE site just to confirm that it gets correctly placed in that site's OU, and I'll fire off another deployment with the adJoin box unchecked to test that I can also skip the AD configuration completely.

Conclusion

Confession time: I had actually started writing this post weeks ago. At that point, my efforts to bend the built-in AD integration to my will had been fairly unsuccessful, so I was instead working on a custom vRO workflow to accomplish the same basic thing. I circled back to try the AD integration again after upgrading the vRA environment to the latest 8.4.2 release, and found that it actually works quite well now. So I happily scrapped my ~50 lines of messy vRO JavaScript in favor of just three lines of YAML in the cloud template.

I love it when things work out!

Virtually Potato migrated to GitHub Pages!
https://runtimeterror.dev/virtually-potato-migrated-to-github-pages/
tags: linux, meta, chromeos, crostini, jekyll

After a bit less than a year of hosting my little technical blog with Hashnode↗, I spent a few days migrating the content over to a new format hosted with GitHub Pages↗.
So long, Hashnode Hashnode served me well for the most part, but it was never really a great fit for me. Hashnode's focus is on developer content, and I'm not really a developer; I'm a sysadmin who occasionally develops solutions to solve my needs, but the code is never the end goal for me. As a result, I didn't spend much time in the (large and extremely active) community associated with Hashnode. It's a perfectly adequate blogging platform apart from the community, but it's really built to prop up that community aspect and I found that to be a bit limiting - particularly once Hashnode stopped letting you create tags to be used within your blog and instead only allowed you to choose from the tags↗ already popular in the community. There are hundreds of tags for different coding languages, but not any that would cover the infrastructure virtualization or other technical projects that I tend to write about.
Hello, GitHub Pages I knew about GitHub Pages, but had never seriously looked into it. Once I did, though, it seemed like a much better fit for v{:potato:} - particularly when combined with Jekyll↗ to take in Markdown posts and render them into static HTML. This approach would provide me more flexibility (and the ability to use whatever tags I want!), while still letting me easily compose my posts with Markdown. And I can now do my composition locally (and even offline!), and just do a git push to publish. Very cool!
Getting started I found that the quite-popular Minimal Mistakes↗ theme for Jekyll offers a remote theme starter↗ that can be used to quickly get things going. I just used that generator to spawn a new repository in my GitHub account (jbowdre.github.io↗). And that was it - I had a starter GitHub Pages-hosted Jekyll-powered static site with an elegant theme applied. I could even make changes to the various configuration and sample post files, point any browser to https://jbowdre.github.io, and see the results almost immediately. I got to work digging through the lengthy configuration documentation↗ to start making the site my own, like connecting with my custom domain↗ and enabling GitHub Issue-based comments↗.
Working locally A quick git clone operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's Linux environment. That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
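For reference, that round-trip workflow is just a handful of git commands; the repository URL below is inferred from the repo name mentioned above, so treat it as illustrative:

```shell
# clone the GitHub Pages repo into the local Linux environment (URL assumed from the repo name above)
git clone https://github.com/jbowdre/jbowdre.github.io.git
cd jbowdre.github.io

# ...create or edit Markdown posts under _posts/ with VS Code...

# then commit and push to publish
git add .
git commit -m "publish new post"
git push
```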
In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
sudo apt-get install ruby-full build-essential zlib1g-dev # [tl! .cmd] I added the following to my ~/.zshrc file so that the gems would be installed under my home directory rather than somewhere more privileged:
export GEM_HOME="$HOME/gems" # [tl! .cmd:1]
export PATH="$HOME/gems/bin:$PATH"

And then ran source ~/.zshrc so the change would take immediate effect.
I could then install Jekyll:
gem install jekyll bundler # [tl! .cmd]

I then cd'd to the local repo and ran bundle install to also load up the components specified in the repo's Gemfile.
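Those last two steps look something like this (the directory name is assumed to match the repo):

```shell
cd jbowdre.github.io  # the locally-cloned repo (name assumed)
bundle install        # install the gems pinned in the repo's Gemfile
```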
And, finally, I can run this to start up the local Jekyll server instance:
bundle exec jekyll serve -l --drafts # [tl! .cmd] Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml # [tl! .nocopy:start] Source: /home/jbowdre/projects/jbowdre.github.io Destination: /home/jbowdre/projects/jbowdre.github.io/_site Incremental build: enabled Generating... Remote Theme: Using theme mmistakes/minimal-mistakes Jekyll Feed: Generating feed for posts GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data. done in 30.978 seconds. Auto-regeneration: enabled for &#39;/home/jbowdre/projects/jbowdre.github.io&#39; LiveReload address: http://0.0.0.0:35729 Server address: http://0.0.0.0:4000 Server running... press ctrl-c to stop. # [tl! .nocopy:end] And there it is! git push time Alright that's enough rambling for now. I'm very happy with this new setup, particularly with the automatically-generated Table of Contents to help folks navigate some of my longer posts. (I can't believe I was having to piece those together manually in this blog's previous iteration!)
I'll continue to make some additional tweaks in the coming weeks but for now I'll git push this post and get back to documenting my never-ending vRA project.
`,description:"",url:"https://runtimeterror.dev/virtually-potato-migrated-to-github-pages/"},"https://runtimeterror.dev/script-to-update-image-embed-links-in-markdown-files/":{title:"Script to update image embed links in Markdown files",tags:["linux","shell","regex","jekyll","meta"],content:"I'm preparing to migrate this blog thingy from Hashnode (which has been great!) to a GitHub Pages site with Jekyll↗ so that I can write posts locally and then just do a git push to publish them - and get some more practice using git in the process. Of course, I've written some admittedly-great content here and I don't want to abandon that.\nHashnode helpfully automatically backs up my posts in Markdown format to a private GitHub repo so it was easy to clone those into a local working directory, but all the embedded images were still hosted on Hashnode:\n![Clever image title](https://cdn.hashnode.com/res/hashnode/image/upload/v1600098180227/lhTnVwCO3.png) I wanted to download those images to ./assets/images/posts-2020/ within my local Jekyll working directory, and then update the *.md files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:\n# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash # Hasty script to process a blog post markdown file, capture the URL for embedded images, # download the image locally, and modify the markdown file with the relative image path. # # Run it from the top level of a Jekyll blog directory for best results, and pass the # filename of the blog post you&#39;d like to process. # # Ex: ./imageMigration.sh 2021-07-19-Bulk-migrating-images-in-a-blog-post.md postfile=&#34;_posts/$1&#34; imageUrls=($(grep -o -P &#39;(?&lt;=!\\[)(?:[^\\]]+)\\]\\(([^\\)]+)&#39; $postfile | grep -o -P &#39;http.*&#39;)) imageNames=($(for name in ${imageUrls[@]}; do echo $name | grep -o -P &#39;[^\\/]+\\.[[:alnum:]]+$&#39;; done)) imagePaths=($(for name in ${imageNames[@]}; do echo &#34;assets/images/posts-2020/${name}&#34;; done)) echo -e &#34;\\nProcessing $postfile...\\n&#34; for index in ${!imageUrls[@]}; do echo -e &#34;${imageUrls[index]}\\n =&gt; ${imagePaths[index]}&#34; curl ${imageUrls[index]} --output ${imagePaths[index]} sed -i &#34;s|${imageUrls[index]}|${imagePaths[index]}|&#34; $postfile done I could then run that against all of the Markdown posts under ./_posts/ with:\nfor post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done # [tl! .cmd] And the image embeds in the local copy of my posts now all look like this:\n![Clever image title](lhTnVwCO3.png) Brilliant!\n",description:"",url:"https://runtimeterror.dev/script-to-update-image-embed-links-in-markdown-files/"},"https://runtimeterror.dev/federated-matrix-server-synapse-on-oracle-clouds-free-tier/":{title:"Federated Matrix Server (Synapse) on Oracle Cloud's Free Tier",tags:["caddy","docker","linux","cloud","containers","chat","selfhosting"],content:`I've heard a lot lately about how generous Oracle Cloud's free tier↗ is, particularly when compared with the free offerings↗ from other public cloud providers. Signing up for an account was fairly straight-forward, though I did have to wait a few hours for an actual human to call me on an actual telephone to verify my account. 
Once in, I thought it would be fun to try building my own Matrix↗ homeserver to really benefit from the network's decentralized-but-federated model for secure end-to-end encrypted communications.
There are two primary projects for Matrix homeservers: Synapse↗ and Dendrite↗. Dendrite is the newer, more efficient server, but it's not quite feature complete. I'll be using Synapse for my build to make sure that everything works right off the bat, and I will be running the server in a Docker container to make it (relatively) easy to replace if I feel more comfortable about Dendrite in the future.
As usual, it took quite a bit of fumbling about before I got everything working correctly. Here I'll share the steps I used to get up and running.
Instance creation Getting a VM spun up on Oracle Cloud was a pretty simple process. I logged into my account, navigated to Menu -&gt; Compute -&gt; Instances, and clicked on the big blue Create Instance button. I'll be hosting this for my bowdre.net domain, so I start by naming the instance accordingly: matrix.bowdre.net. Naming it isn't strictly necessary, but it does help with keeping track of things. The instance defaults to using an Oracle Linux image. I'd rather use an Ubuntu one for this, simply because I was able to find more documentation on getting Synapse going on Debian-based systems. So I hit the Edit button next to Image and Shape, select the Change Image option, pick Canonical Ubuntu from the list of available images, and finally click Select Image to confirm my choice. This will be an Ubuntu 20.04 image running on a VM.Standard.E2.1.Micro instance, which gets a single AMD EPYC 7551 CPU with 2.0GHz base frequency and 1GB of RAM. It's not much, but it's free - and it should do just fine for this project.
I can leave the rest of the options as their defaults, making sure that the instance will be allotted a public IPv4 address. Scrolling down a bit to the Add SSH Keys section, I leave the default Generate a key pair for me option selected, and click the very-important Save Private Key button to download the private key to my computer so that I'll be able to connect to the instance via SSH. Now I can finally click the blue Create Instance button at the bottom of the screen, and just wait a few minutes for it to start up. Once the status shows a big green &quot;Running&quot; square, I'm ready to connect! I'll copy the listed public IP and make a note of the default username (ubuntu). I can then plug the IP, username, and the private key I downloaded earlier into my SSH client (the Secure Shell extension↗ for Google Chrome since I'm doing this from my Pixelbook), and log in to my new VM in The Cloud. DNS setup According to Oracle's docs↗, the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the ddclient-based dynamic DNS configuration I've used in the past and instead go straight to my registrar's DNS management portal and create a new A record for matrix.bowdre.net with the instance's public IP.
While I'm managing DNS, it might be good to take a look at the requirements for federating my new server↗ with the other Matrix servers out there. I'd like for users' identities on my server to be identified by the bowdre.net domain (@user:bowdre.net) rather than the full matrix.bowdre.net FQDN (@user:matrix.bowdre.net is kind of cumbersome). The standard way to do this is to leverage .well-known delegation↗, where the URL at http://bowdre.net/.well-known/matrix/server would return a JSON structure telling other Matrix servers how to connect to mine:
{ "m.server": "matrix.bowdre.net:8448" }

I don't currently have another server already handling requests to bowdre.net, so for now I'll add another A record with the same public IP address to my DNS configuration. Requests for both bowdre.net and matrix.bowdre.net will reach the same server instance, but those requests will be handled differently. More on that later.
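Once the records propagate, a quick sanity check with dig (assuming it's installed locally) should show both names resolving to the same public IP:

```shell
# both the apex domain and the matrix subdomain should return the instance's public IP
dig +short A bowdre.net
dig +short A matrix.bowdre.net
```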
An alternative to this .well-known delegation would be to use SRV DNS record delegation↗ to accomplish the same thing. I'd create an SRV record for _matrix._tcp.bowdre.net with the data 0 10 8448 matrix.bowdre.net (priority=0, weight=10, port=8448, target=matrix.bowdre.net) which would again let other Matrix servers know where to send the federation traffic for my server. This approach has an advantage of not needing to make any changes on the bowdre.net web server, but it would require the delegated matrix.bowdre.net server to also return a valid certificate for bowdre.net↗. Trying to get a Let's Encrypt certificate for a server name that doesn't resolve authoritatively in DNS sounds more complicated than I want to get into with this project, so I'll move forward with my plan to use the .well-known delegation instead.
But first, I need to make sure that the traffic reaches the server to begin with.
Firewall configuration

Synapse listens on port 8008 for connections from messaging clients, and typically uses port 8448 for federation traffic from other Matrix servers. Rather than expose those ports directly, I'm going to put Synapse behind a reverse proxy on HTTPS port 443. I'll also need to allow inbound traffic on HTTP port 80 for ACME certificate challenges. I've got two firewalls to contend with: the Oracle Cloud one which blocks traffic from getting into my virtual cloud network, and the host firewall running inside the VM.
I'll tackle the cloud firewall first. From the page showing my instance details, I click on the subnet listed under the Primary VNIC heading: I then look in the Security Lists section and click on the Default Security List: The Ingress Rules section lists the existing inbound firewall exceptions, which by default is basically just SSH. I click on Add Ingress Rules to create a new one. I want this to apply to traffic from any source IP so I enter the CIDR 0.0.0.0/0, and I enter the Destination Port Range as 80,443. I also add a brief description and click Add Ingress Rules. Success! My new ingress rules appear at the bottom of the list. That gets traffic from the internet and to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with iptables to change that. (You typically use ufw to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with iptables when I tried adding it. I eventually decided it was better to just interact with iptables directly). I'll start by listing the existing rules on the INPUT chain:
sudo iptables -L INPUT --line-numbers # [tl! .cmd] Chain INPUT (policy ACCEPT) # [tl! .nocopy:7] num target prot opt source destination 1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED 2 ACCEPT icmp -- anywhere anywhere 3 ACCEPT all -- anywhere anywhere 4 ACCEPT udp -- anywhere anywhere udp spt:ntp 5 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh 6 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Note the REJECT all statement at line 6. I'll need to insert my new ACCEPT rules for ports 80 and 443 above that implicit deny all:
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1] sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT And then I'll confirm that the order is correct:
sudo iptables -L INPUT --line-numbers # [tl! .cmd] Chain INPUT (policy ACCEPT) # [tl! .nocopy:9] num target prot opt source destination 1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED 2 ACCEPT icmp -- anywhere anywhere 3 ACCEPT all -- anywhere anywhere 4 ACCEPT udp -- anywhere anywhere udp spt:ntp 5 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh 6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https 7 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http 8 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited I can use nmap running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still &quot;closed&quot; since nothing is listening on the ports yet, but the connections aren't being rejected.)
nmap -Pn matrix.bowdre.net # [tl! .cmd] Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT # [tl! .nocopy:10] Nmap scan report for matrix.bowdre.net(150.136.6.180) Host is up (0.086s latency). Other addresses for matrix.bowdre.net (not scanned): 2607:7700:0:1d:0:1:9688:6b4 Not shown: 997 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 443/tcp closed https Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever iptables starts up:
sudo netfilter-persistent save # [tl! .cmd] run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1] run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save Reverse proxy setup I had initially planned on using certbot to generate Let's Encrypt certificates, and then reference the certs as needed from an nginx or Apache reverse proxy configuration. While researching how the proxy would need to be configured to front Synapse↗, I found this sample nginx configuration:
# torchlight! {&#34;lineNumbers&#34;: true} server { listen 443 ssl http2; listen [::]:443 ssl http2; # For the federation port listen 8448 ssl http2 default_server; listen [::]:8448 ssl http2 default_server; server_name matrix.example.com; location ~* ^(\\/_matrix|\\/_synapse\\/client) { proxy_pass http://localhost:8008; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $host; # Nginx by default only allows file uploads up to 1M in size # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml client_max_body_size 50M; } } And this sample Apache one:
# torchlight! {&#34;lineNumbers&#34;: true} &lt;VirtualHost *:443&gt; SSLEngine on ServerName matrix.example.com RequestHeader set &#34;X-Forwarded-Proto&#34; expr=%{REQUEST_SCHEME} AllowEncodedSlashes NoDecode ProxyPreserveHost on ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client &lt;/VirtualHost&gt; &lt;VirtualHost *:8448&gt; SSLEngine on ServerName example.com RequestHeader set &#34;X-Forwarded-Proto&#34; expr=%{REQUEST_SCHEME} AllowEncodedSlashes NoDecode ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix &lt;/VirtualHost&gt; I also found this sample config for another web server called Caddy↗:
# torchlight! {&#34;lineNumbers&#34;: true} matrix.example.com { reverse_proxy /_matrix/* http://localhost:8008 reverse_proxy /_synapse/client/* http://localhost:8008 } example.com:8448 { reverse_proxy http://localhost:8008 } One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually handle the certificates entirely automatically↗ - in addition to having a much easier config. Installing Caddy↗ wasn't too bad, either:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4] curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/gpg.key&#39; | sudo apt-key add - curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt&#39; | sudo tee /etc/apt/sources.list.d/caddy-stable.list sudo apt update sudo apt install caddy Then I just need to put my configuration into the default Caddyfile, including the required .well-known delegation piece from earlier.
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
matrix.bowdre.net {
  reverse_proxy /_matrix/* http://localhost:8008
  reverse_proxy /_synapse/client/* http://localhost:8008
}

bowdre.net {
  route {
    respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}`
    redir https://virtuallypotato.com
  }
}

There's a lot happening in that 11-line Caddyfile, but it's not complicated by any means. The matrix.bowdre.net section is pretty much exactly yanked from the sample config, and it's going to pass any requests that start like matrix.bowdre.net/_matrix/ or matrix.bowdre.net/_synapse/client/ through to the Synapse server listening locally on port 8008. Caddy will automatically request and apply a Let's Encrypt or ZeroSSL cert for any server names spelled out in the config - very slick!
I set up the bowdre.net section to return the appropriate JSON string to tell other Matrix servers to connect to matrix.bowdre.net on port 443 (so that I don't have to open port 8448 through the firewalls), and to redirect all other traffic to one of my favorite technical blogs (maybe you've heard of it?). I had to wrap the respond and redir directives in a route { } block↗ because otherwise Caddy's implicit precedence↗ would execute the redirect for all traffic and never hand out the necessary .well-known data.
(I wouldn't need that section at all if I were using a separate web server for bowdre.net; instead, I'd basically just add that respond /.well-known/matrix/server line to that other server's config.)
Now to enable the caddy service, start it, and restart it so that it loads the new config:
sudo systemctl enable caddy # [tl! .cmd:2] sudo systemctl start caddy sudo systemctl restart caddy If I repeat my nmap scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening.
nmap -Pn matrix.bowdre.net # [tl! .cmd] Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT # [tl! .nocopy:9] Nmap scan report for matrix.bowdre.net (150.136.6.180) Host is up (0.034s latency). Not shown: 997 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 443/tcp open https Nmap done: 1 IP address (1 host up) scanned in 5.29 seconds Browsing to https://matrix.bowdre.net shows a blank page - but a valid and trusted certificate that I did absolutely nothing to configure! The .well-known URL also returns the expected JSON: And trying to hit anything else at https://bowdre.net brings me right back here.
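The same check works from the command line, too; something like this, with the expected response based on the Caddyfile above:

```shell
# the delegation document served by Caddy on the apex domain
curl https://bowdre.net/.well-known/matrix/server
# {"m.server": "matrix.bowdre.net:443"}
```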
And again, the config to do all this (including getting valid certs for two server names!) is just 11 lines long. Caddy is seriously and magically cool.
Okay, let's actually serve something up now.
Synapse installation Docker setup Before I can get on with deploying Synapse in Docker↗, I first need to install Docker↗ on the system:
sudo apt-get install \\ # [tl! .cmd] apt-transport-https \\ ca-certificates \\ curl \\ gnupg \\ lsb-release curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \\ # [tl! .cmd] sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg echo \\ # [tl! .cmd] &#34;deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \\ $(lsb_release -cs) stable&#34; | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null sudo apt update # [tl! .cmd:1] sudo apt install docker-ce docker-ce-cli containerd.io I'll also install Docker Compose↗:
sudo curl -L &#34;https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)&#34; \\ # [tl! .cmd] -o /usr/local/bin/docker-compose sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd] And I'll add my ubuntu user to the docker group so that I won't have to run every docker command with sudo:
sudo usermod -G docker -a ubuntu # [tl! .cmd] I'll log out and back in so that the membership change takes effect, and then test both docker and docker-compose to make sure they're working:
docker --version # [tl! .cmd] Docker version 20.10.7, build f0df350 # [tl! .nocopy:1] docker-compose --version # [tl! .cmd] docker-compose version 1.29.2, build 5becea4c # [tl! .nocopy] Synapse setup Now I'll make a place for the Synapse installation to live, including a data folder that will be mounted into the container:
sudo mkdir -p /opt/matrix/synapse/data # [tl! .cmd:1] cd /opt/matrix/synapse And then I'll create the compose file to define the deployment:
# torchlight! {&#34;lineNumbers&#34;: true} # /opt/matrix/synapse/docker-compose.yaml services: synapse: container_name: &#34;synapse&#34; image: &#34;matrixdotorg/synapse&#34; restart: &#34;unless-stopped&#34; ports: - &#34;127.0.0.1:8008:8008&#34; volumes: - &#34;./data/:/data/&#34; Before I can fire this up, I'll need to generate an initial configuration as described in the documentation↗. Here I'll specify the server name that I'd like other Matrix servers to know mine by (bowdre.net):
docker run -it --rm \\ # [tl! .cmd] -v &#34;/opt/matrix/synapse/data:/data&#34; \\ -e SYNAPSE_SERVER_NAME=bowdre.net \\ -e SYNAPSE_REPORT_STATS=yes \\ matrixdotorg/synapse generate # [tl! .nocopy:start] Unable to find image &#39;matrixdotorg/synapse:latest&#39; locally latest: Pulling from matrixdotorg/synapse 69692152171a: Pull complete 66a3c154490a: Pull complete 3e35bdfb65b2: Pull complete f2c4c4355073: Pull complete 65d67526c337: Pull complete 5186d323ad7f: Pull complete 436afe4e6bba: Pull complete c099b298f773: Pull complete 50b871f28549: Pull complete Digest: sha256:5ccac6349f639367fcf79490ed5c2377f56039ceb622641d196574278ed99b74 Status: Downloaded newer image for matrixdotorg/synapse:latest Creating log config /data/bowdre.net.log.config Generating config file /data/homeserver.yaml Generating signing key file /data/bowdre.net.signing.key A config file has been generated in &#39;/data/homeserver.yaml&#39; for server name &#39;bowdre.net&#39;. Please review this file and customise it to your needs. # [tl! .nocopy:end] As instructed, I'll use sudo vi data/homeserver.yaml to review/modify the generated config. I'll leave
server_name: "bowdre.net"

since that's how I'd like other servers to know my server, and I'll uncomment/edit in:
public_baseurl: https://matrix.bowdre.net since that's what users (namely, me) will put into their Matrix clients to connect.
And for now, I'll temporarily set:
enable_registration: true so that I can create a user account without fumbling with the CLI. I'll be sure to set enable_registration: false again once I've registered the account(s) I need to have on my server. The instance has limited resources so it's probably not a great idea to let just anybody create an account on it.
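(If I wanted to avoid toggling enable_registration at all, the Synapse image also includes a register_new_matrix_user helper that can create accounts from the CLI once the container is running. A rough sketch, assuming the generated homeserver.yaml contains a registration_shared_secret:)

```shell
# hypothetical CLI alternative: create a user directly against the running container;
# this relies on a registration_shared_secret being present in homeserver.yaml
docker exec -it synapse register_new_matrix_user \
  -c /data/homeserver.yaml http://localhost:8008
# it prompts for a username, password, and whether the account should be an admin
```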
There are a bunch of other useful configurations that can be made here, but these will do to get things going for now.
Time to start it up:
docker-compose up -d # [tl! .cmd]
Creating network "synapse_default" with the default driver # [tl! .nocopy:1]
Creating synapse ... done

And use docker ps to confirm that it's running:
docker ps # [tl! .cmd] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES # [tl! .nocopy:1] 573612ec5735 matrixdotorg/synapse &#34;/start.py&#34; 25 seconds ago Up 23 seconds (healthy) 8009/tcp, 127.0.0.1:8008-&gt;8008/tcp, 8448/tcp synapse Testing And I can point my browser to https://matrix.bowdre.net/_matrix/static/ and see the Matrix landing page: Before I start trying to connect with a client, I'm going to plug the server address in to the Matrix Federation Tester↗ to make sure that other servers will be able to talk to it without any problems: And I can view the JSON report at the bottom of the page to confirm that it's correctly pulling my .well-known delegation:
{
  "WellKnownResult": {
    "m.server": "matrix.bowdre.net:443",
    "CacheExpiresAt": 0
  },
}

Now I can fire up my Matrix client of choice↗, specify my homeserver using its full FQDN, and register↗ a new user account. (Once my account gets created, I go back to edit /opt/matrix/synapse/data/homeserver.yaml again and set enable_registration: false, then fire a docker-compose restart command to restart the Synapse container.)
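As one last check from the command line, the unauthenticated client API endpoint should also answer through Caddy; a quick curl is enough to confirm the proxying:

```shell
# should return a JSON list of supported Matrix client-server spec versions
curl -s https://matrix.bowdre.net/_matrix/client/versions
```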
Wrap-up And that's it! I now have my own Matrix server, and I can use my new account for secure chats with Matrix users on any other federated homeserver. It works really well for directly messaging other individuals, and also for participating in small group chats. The server does kind of fall on its face if I try to join a massively-populated (like 500+ users) room, but I'm not going to complain about that too much on a free-tier server.
All in, I'm pretty pleased with how this little project turned out, and I learned quite a bit along the way. I'm tremendously impressed by Caddy's power and simplicity, and I look forward to using it more in future projects.
Update: Updating

After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
sudo apt update # [tl! .cmd:1] sudo apt upgrade Here's what I do to update the container:
# Move to the working directory # [tl! .nocopy]
cd /opt/matrix/synapse # [tl! .cmd]
# Pull a new version of the synapse image # [tl! .nocopy]
docker-compose pull # [tl! .cmd]
# Stop the container # [tl! .nocopy]
docker-compose down # [tl! .cmd]
# Start it back up without the old version # [tl! .nocopy]
docker-compose up -d --remove-orphans # [tl! .cmd]
# Periodically remove the old docker images # [tl! .nocopy]
docker image prune # [tl! .cmd]

Adding VM Notes and Custom Attributes with vRA8
https://runtimeterror.dev/adding-vm-notes-and-custom-attributes-with-vra8/
tags: vmware, vra, vro, javascript

In past posts, I started by creating a basic deployment infrastructure in Cloud Assembly and using tags to group those resources. I then wrote an integration to let vRA8 use phpIPAM for static address assignments. I implemented a vRO workflow for generating unique VM names which fit an organization's established naming standard, and then extended the workflow to avoid any naming conflicts in Active Directory and DNS. And, finally, I created an intelligent provisioning request form in Service Broker to make it easy for users to get the servers they need. That's got the core functionality pretty well sorted, so moving forward I'll be detailing additions that enable new capabilities and enhance the experience.
In this post, I'll describe how to get certain details from the Service Broker request form and into the VM's properties in vCenter. The obvious application of this is adding descriptive notes so I can remember what purpose a VM serves, but I will also be using Custom Attributes↗ to store the server's Point of Contact information and a record of which ticketing system request resulted in the server's creation.
New inputs I'll start this by adding a few new inputs to the cloud template in Cloud Assembly. I'm using a basic regex on the poc_email field to make sure that the user's input is probably a valid email address in the format [some string]@[some string].[some string].
inputs: [...] description: type: string title: Description description: Server function/purpose default: Testing and evaluation poc_name: type: string title: Point of Contact Name default: Jack Shephard poc_email: type: string title: Point of Contact Email default: [email protected] pattern: &#39;^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$&#39; # [tl! highlight] ticket: type: string title: Ticket/Request Number default: 4815162342 [...] I'll also need to add these to the resources section of the template so that they will get passed along with the deployment properties.
I'm actually going to combine the poc_name and poc_email fields into a single poc string.
resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: &lt;...&gt; poc: &#39;\${input.poc_name + &#34; (&#34; + input.poc_email + &#34;)&#34;}&#39; # [tl! highlight] ticket: &#39;\${input.ticket}&#39; description: &#39;\${input.description}&#39; &lt;...&gt; I'll save this as a new version so that the changes will be available in the Service Broker front-end. Service Broker custom form I can then go to Service Broker and drag the new fields onto the Custom Form canvas. (If the new fields don't show up, hit up the Content Sources section of Service Broker, select the content source, and click the &quot;Save and Import&quot; button to sync the changes.) While I'm at it, I set the Description field to display as a text area (encouraging more detailed input), and I also set all the fields on the form to be required. vRO workflow Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after telling vRO how to connect to the vCenter, of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow &quot;VM Post-Provisioning&quot;. The workflow will have a single input from vRA, inputProperties of type Properties. The first thing this workflow needs to do is parse inputProperties (Properties) to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it Get VM Object. It will take inputProperties (Properties) as its sole input, and output a new variable called vm of type VC:VirtualMachine. The script for this task is fairly straightforward:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: Get VM Object // Inputs: inputProperties (Properties) // Outputs: vm (VC:VirtualMachine) var name = inputProperties.resourceNames[0] var vms = VcPlugin.getAllVirtualMachines(null, name) System.log(&#34;Found VM object: &#34; + vms[0]) vm = vms[0] I'll add another scriptable task item to the workflow to actually apply the notes to the VM - I'll call it Set Notes, and it will take both vm (VC:VirtualMachine) and inputProperties (Properties) as its inputs. The first part of the script creates a new VM config spec, inserts the description into the spec, and then reconfigures the selected VM with the new spec.
The second part uses a built-in action to set the Point of Contact and Ticket custom attributes accordingly.
// torchlight! {&#34;lineNumbers&#34;: true} // Javascript: Set Notes // Inputs: vm (VC:VirtualMachine), inputProperties (Properties) // Outputs: None var notes = inputProperties.customProperties.description var poc = inputProperties.customProperties.poc var ticket = inputProperties.customProperties.ticket var spec = new VcVirtualMachineConfigSpec() spec.annotation = notes vm.reconfigVM_Task(spec) System.getModule(&#34;com.vmware.library.vc.customattribute&#34;).setOrCreateCustomField(vm,&#34;Point of Contact&#34;, poc) // [tl! highlight:2] System.getModule(&#34;com.vmware.library.vc.customattribute&#34;).setOrCreateCustomField(vm,&#34;Ticket&#34;, ticket) Extensibility subscription Now I need to return to Cloud Assembly and create a new extensibility subscription that will call this new workflow at the appropriate time. I'll call it &quot;VM Post-Provisioning&quot; and attach it to the &quot;Compute Post Provision&quot; topic. And then I'll link it to my new workflow: Testing And then back to Service Broker to request a VM and see if it works:
It worked! In the future, I'll be exploring more features that I can add on to this &quot;VM Post-Provisioning&quot; workflow like creating static DNS records as needed.
`,description:"",url:"https://runtimeterror.dev/adding-vm-notes-and-custom-attributes-with-vra8/"},"https://runtimeterror.dev/adguard-home-in-docker-on-photon-os/":{title:"AdGuard Home in Docker on Photon OS",tags:["docker","vmware","containers","networking","security","selfhosting"],content:`I was recently introduced to AdGuard Home↗ by way of its very slick Home Assistant Add-On↗. Compared to the relatively-complicated Pi-hole↗ setup that I had implemented several months back, AdGuard Home was much simpler to deploy (particularly since I basically just had to click the &quot;Install&quot; button from the Home Assistant add-ons manage). It also has a more modern UI with options arranged more logically (to me, at least), and it just feels easier to use overall. It worked great for a time... until my Home Assistant instance crashed, taking down AdGuard Home (and my internet access) with it. Maybe bundling these services isn't the best move.
I'd like to use AdGuard Home, but the system it runs on needs to be rock-solid. With that in mind, I thought it might be fun to instead run AdGuard Home in a Docker container on a VM running VMware's container-optimized Photon OS↗, primarily because I want an excuse to play more with Docker and Photon (but also the thing I just mentioned about stability). So here's what it took to get that running.
Deploy Photon

First up: getting Photon. There are a variety of delivery formats available here↗, and I opted for the HW13 OVA version. I copied that download URL:
https://packages.vmware.com/photon/4.0/GA/ova/photon-hw13-uefi-4.0-1526e30ba0.ova Then I went into vCenter, hit the Deploy OVF Template option, and pasted in the URL: This lets me skip the kind of tedious &quot;download file from internet and then upload file to vCenter&quot; dance, and I can then proceed to click through the rest of the deployment options. Once the VM is created, I power it on and hop into the web console. The default root username is changeme, and I'll of course be forced to change that the first time I log in.
Configure Networking My next step was to configure a static IP address by creating /etc/systemd/network/10-static-en.network and entering the following contents:
[Match]
Name=eth0

[Network]
Address=192.168.1.2/24
Gateway=192.168.1.1
DNS=192.168.1.5

By the way, that 192.168.1.5 address is my Windows DC/DNS server that I use for my homelab environment. That's the DNS server that's configured on my Google Wifi router, and it will continue to handle resolution for local addresses.
I also disabled DHCP by setting DHCP=no in /etc/systemd/network/99-dhcp-en.network:
[Match]
Name=e*

[Network]
DHCP=no # [tl! highlight]
IPv6AcceptRA=no

I set the required permissions on my new network configuration file with chmod 644 /etc/systemd/network/10-static-en.network and then restarted networkd with systemctl restart systemd-networkd.
I then ran networkctl a couple of times until the eth0 interface went fully green, and did an ip a to confirm that the address had been applied. One last little bit of housekeeping is to change the hostname with hostnamectl set-hostname adguard and then reboot for good measure. I can then log in via SSH to continue the setup. Now that I'm in, I run tdnf update to make sure the VM is fully up to date.
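Collected in one place, the housekeeping commands described above look roughly like this (run as root):

```shell
chmod 644 /etc/systemd/network/10-static-en.network  # make the new config readable
systemctl restart systemd-networkd                   # apply the static IP configuration
networkctl                                           # repeat until eth0 shows as routable
ip a                                                 # confirm the 192.168.1.2 address is applied
hostnamectl set-hostname adguard                     # set the new hostname
reboot                                               # for good measure
# after reconnecting over SSH:
tdnf update                                          # bring Photon OS fully up to date
```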
Install docker-compose Photon OS ships with Docker preinstalled, but I need to install docker-compose on my own to simplify container deployment. Per the install instructions↗, I run:
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose # [tl! .cmd_root:1]
chmod +x /usr/local/bin/docker-compose

And then verify that it works:
docker-compose --version # [tl! .cmd_root] docker-compose version 1.29.2, build 5becea4c # [tl! .nocopy] I'll also want to enable and start Docker:
systemctl enable docker # [tl! .cmd_root:1] systemctl start docker Disable DNSStubListener By default, the resolved daemon is listening on 127.0.0.53:53 and will prevent docker from binding to that port. Fortunately it's pretty easy↗ to disable the DNSStubListener and free up the port:
sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' /etc/systemd/resolved.conf # [tl! .cmd_root:2]
rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
systemctl restart systemd-resolved
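A quick way to confirm that the stub listener actually let go of port 53 before starting the container (assuming ss from iproute2 is available on the Photon image):

```shell
# nothing should be listed as listening on port 53 anymore
ss -tulpn | grep ':53 ' || echo "port 53 is free"
```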
I create a directory for AdGuard to live in, and then create a docker-compose.yaml therein:
mkdir ~/adguard # [tl! .cmd_root:2] cd ~/adguard vi docker-compose.yaml And I define the container:
# torchlight! {"lineNumbers": true}
version: "3"
services:
  adguard:
    container_name: adguard
    restart: unless-stopped
    image: adguard/adguardhome:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "68:68/tcp"
      - "68:68/udp"
      - "80:80/tcp"
      - "443:443/tcp"
      - "853:853/tcp"
      - "3000:3000/tcp"
    volumes:
      - './workdir:/opt/adguardhome/work'
      - './confdir:/opt/adguardhome/conf'
    cap_add:
      - NET_ADMIN

Then I can fire it up with docker-compose up --detach:
docker-compose up --detach # [tl! .cmd_root focus:start] Creating network &#34;adguard_default&#34; with the default driver # [tl! .nocopy:start] Pulling adguard (adguard/adguardhome:latest)... latest: Pulling from adguard/adguardhome # [tl! focus:end] 339de151aab4: Pull complete 4db4be09618a: Pull complete 7e918e810e4e: Pull complete bfad96428d01: Pull complete Digest: sha256:de7d791b814560663fe95f9812fca2d6dd9d6507e4b1b29926cc7b4a08a676ad # [tl! focus:3] Status: Downloaded newer image for adguard/adguardhome:latest Creating adguard ... done # [tl! .nocopy:end] Post-deploy configuration Next, I point a web browser to http://adguard.lab.bowdre.net:3000 to perform the initial (minimal) setup: Once that's done, I can log in to the dashboard at http://adguard.lab.bowdre.net/login.html: AdGuard Home ships with pretty sensible defaults so there's not really a huge need to actually do a lot of configuration. Any changes that I do do will be saved in ~/adguard/confdir/AdGuardHome.yaml so they will be preserved across container changes.
Getting requests to AdGuard Home Normally, you'd tell your Wifi router what DNS server you want to use, and it would relay that information to the connected DHCP clients. Google Wifi is a bit funny, in that it wants to function as a DNS proxy for the network. When you configure a custom DNS server for Google Wifi, it still tells the DHCP clients to send the requests to the router, and the router then forwards the queries on to the configured DNS server.
I already have Google Wifi set up to use my Windows DC (at 192.168.1.5) for DNS. That lets me easily access systems on my internal lab.bowdre.net domain without having to manually configure DNS, and the DC forwards resolution requests it can't handle on to the upstream (internet) DNS servers.
To easily insert my AdGuard Home instance into the flow, I pop in to my Windows DC and configure the AdGuard Home address (192.168.1.2) as the primary DNS forwarder. The DC will continue to handle internal resolutions, and anything it can't handle will now get passed up the chain to AdGuard Home. And this also gives me a bit of a failsafe, in that queries will fail back to the previously-configured upstream DNS if AdGuard Home doesn't respond within a few seconds. It's working! Caveat Chaining my DNS configurations in this way (router -&gt; DC -&gt; AdGuard Home -&gt; internet) does have a bit of a limitation, in that all queries will appear to come from the Windows server: I won't be able to do any per-client filtering as a result, but honestly I'm okay with that as I already use the &quot;Pause Internet&quot; option in Google Wifi to block outbound traffic from certain devices anyway. And using the Windows DNS as an intermediary makes it significantly quicker and easier to switch things up if I run into problems later; changing the forwarder here takes effect instantly rather than having to manually update all of my clients or wait for DHCP to distribute the change.
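A couple of spot checks through the new chain are easy with dig from any machine on the LAN; the domains here are just examples, and whether a particular name gets blocked depends on the filter lists enabled in AdGuard Home:

```shell
# query the Windows DC, which now forwards anything it can't answer to AdGuard Home
dig +short @192.168.1.5 github.com        # a normal lookup should return real addresses
dig +short @192.168.1.5 doubleclick.net   # an ad/tracking domain should come back empty or sinkholed

# or query AdGuard Home directly to take the DC out of the equation
dig +short @192.168.1.2 doubleclick.net
```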
I have worked around this in the past by bypassing Google Wifi's DHCP↗ but I think it was actually more trouble than it was worth to me.
One last thing... I'm putting a lot of responsibility on both of these VMs, my Windows DC and my new AdGuard Home instance. If they aren't up, I won't have internet access, and that would be a shame. I already have my ESXi host configured to automatically start up when power is (re)applied, so I also adjust the VM Startup/Shutdown Configuration so that AdGuard Home will automatically boot after ESXi is loaded, followed closely by the Windows DC (and the rest of my virtualized infrastructure): So there you have it. Simple DNS-based ad-blocking running on a minimal container-optimized VM that should be more stable than the add-on tacked on to my Home Assistant instance. Enjoy!
`,description:"",url:"https://runtimeterror.dev/adguard-home-in-docker-on-photon-os/"},"https://runtimeterror.dev/vra8-automatic-deployment-naming-another-take/":{title:"vRA8 Automatic Deployment Naming - Another Take",tags:["vmware","vra","vro","javascript"],content:`A few days ago, I shared how I combined a Service Broker Custom Form with a vRO action to automatically generate a unique and descriptive deployment name based on user inputs. That approach works fine but while testing some other components I realized that calling that action each time a user makes a selection isn't necessarily ideal. After a bit of experimentation, I settled on what I believe to be a better solution.
Instead of setting the "Deployment Name" field to use an External Source (vRO), I'm going to configure it to use a Computed Value. This is a bit less flexible, but all the magic happens right there in the form without having to make an expensive vRO call. After setting Value source to Computed value, I also set the Operation to Concatenate (since it is, after all, the only operation choice). I can then use the Add Value button to add some fields. Each can be either a Constant (like a separator) or linked to a Field on the request form. By combining those, I can basically reconstruct the same arrangement that I was previously generating with vRO. So this will generate a name that looks something like [user]_[catalog_item]_[site]-[env][function]-[app], all without having to call vRO! That gets me pretty close to what I want... but there's always the chance that the generated name won't be truly unique. Being able to append a timestamp onto the end would be a big help here.
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
So I hop over to vRO and create a new action, which I call getTimestamp. It doesn't require any inputs, and returns a single string. Here's the code:
// torchlight! {"lineNumbers": true}
// JavaScript: getTimestamp action
// Inputs: None
// Returns: result (String)
var date = new Date();
var result = date.toISOString();
return result
I then drag a Text Field called Timestamp onto the Service Broker Custom Form canvas, and set it to not be visible. And I set it to pull its value from my new net.bowdre.utility/getTimestamp action. Now when the form loads, this field will store a timestamp with thousandths-of-a-second precision.
The last step is to return to the Deployment Name field and link in the new Timestamp field so that it will get tacked on to the end of the generated name. The old way looked like this, where it had to churn a bit after each selection: Here's the newer approach, which feels much snappier: Not bad! Now I can make the Deployment Name field hidden again and get back to work!
`,description:"",url:"https://runtimeterror.dev/vra8-automatic-deployment-naming-another-take/"},"https://runtimeterror.dev/vra8-custom-provisioning-part-four/":{title:"vRA8 Custom Provisioning: Part Four",tags:["vmware","vra","vro","javascript"],content:`My last post in this series marked the completion of the vRealize Orchestrator workflow that I use for pre-provisioning tasks, namely generating a unique sequential hostname which complies with a defined naming standard and doesn't conflict with any existing records in vSphere, Active Directory, or DNS. That takes care of many of the &quot;back-end&quot; tasks for a simple deployment.
This post will add in some &quot;front-end&quot; operations, like creating a customized VM request form in Service Broker and dynamically populating a drop-down with a list of networks available at the user-selected deployment site. I'll also take care of some housekeeping items like automatically generating a unique deployment name.
Getting started with Service Broker Custom Forms So far, I've been working either in the Cloud Assembly or Orchestrator UIs, both of which are really geared toward administrators. Now I'm going to be working with Service Broker which will provide the user-facing front-end. This is where &quot;normal&quot; users will be able to submit provisioning requests without having to worry about any of the underlying infrastructure or orchestration.
Before I can do anything with my Cloud Template in the Service Broker UI, though, I'll need to release it from Cloud Assembly. I do this by opening the template on the Design tab and clicking the Version button at the bottom of the screen. I'll label this as 1.0 and tick the checkbox to Release this version to the catalog. I can then go to the Service Broker UI and add a new Content Source for my Cloud Assembly templates. After hitting the Create &amp; Import button, all released Cloud Templates in the selected Project will show up in the Service Broker Content section: In order for users to deploy from this template, I also need to go to Content Sharing, select the Project, and share the content. This can be done either at the Project level or by selecting individual content items. That template now appears on the Service Broker Catalog tab: That's cool and all, and I could go ahead and request a deployment off of that catalog item right now - but I'm really interested in being able to customize the request form. I do that by clicking on the little three-dot menu icon next to the Content entry and selecting the Customize form option. When you start out, the custom form kind of jumbles up the available fields. So I'm going to start by dragging-and-dropping the fields to resemble the order defined in the Cloud Template: In addition to rearranging the request form fields, Custom Forms also provide significant control over how the form behaves. You can change how a field is displayed, define default values, make fields dependent upon other fields and more. For instance, all of my templates and resources belong to a single project so making the user select the project (from a set of 1) is kind of redundant. Every deployment has to be tied to a project so I can't just remove that field, but I can select the &quot;Project&quot; field on the canvas and change its Visibility to &quot;No&quot; to hide it. It will silently pass along the correct project ID in the background without cluttering up the form. How about that Deployment Name field? In my tests, I'd been manually creating a string of numbers to uniquely identify the deployment, but I'm not going to ask my users to do that. Instead, I'll leverage another great capability of Custom Forms - tying a field value to a result of a custom vRO action!
Automatic deployment naming [Update] I've since come up with what I think is a better approach to handling this. Check it out here!
That means it's time to dive back into the vRealize Orchestrator interface and whip up a new action for this purpose. I created a new action within my existing net.bowdre.utility module called createDeploymentName. A good deployment name must be globally unique, and it would be great if it could also convey some useful information like who requested the deployment, which template it is being deployed from, and the purpose of the server. The siteCode (String), envCode (String), functionCode (String), and appCode (String) variables from the request form will do a great job of describing the server's purpose. I can also pass in some additional information from the Service Broker form like catalogItemName (String) to get the template name and requestedByName (String) to identify the user making the request. So I'll set all those as inputs to my action: I also went ahead and specified that the action will return a String.
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: createDeploymentName // Inputs: catalogItemName (String), requestedByName (String), siteCode (String), // envCode (String), functionCode (String), appCode (String) // Returns: deploymentName (String) var deploymentName = &#39;&#39; // we don&#39;t want to evaluate this until all requested fields have been completed if (catalogItemName != &#39;&#39; &amp;&amp; requestedByName != null &amp;&amp; siteCode != null &amp;&amp; envCode != null &amp;&amp; functionCode != null &amp;&amp; appCode != null) { var date = new Date() deploymentName = requestedByName + &#34;_&#34; + catalogItemName + &#34;_&#34; + siteCode + &#34;-&#34; + envCode.substring(0,1) + functionCode + &#34;-&#34; + appCode.toUpperCase() + &#34;-(&#34; + date.toISOString() + &#34;)&#34; System.debug(&#34;Returning deploymentName: &#34; + deploymentName) } return deploymentName With that sorted, I can go back to the Service Broker interface to modify the custom form a bit more. I select the &quot;Deployment Name&quot; field and click over to the Values tab on the right. There, I set the Value source to &quot;External source&quot; and Select action to the new action I just created, net.bowdre.utility/createDeploymentName. (If the action doesn't appear in the search field, go to Infrastructure &gt; Integrations &gt; Embedded-VRO and click the &quot;Start Data Collection&quot; button to force vRA to update its inventory of vRO actions and workflows.) I then map all the action's inputs to properties available on the request form. The last step before testing is to click that Enable button to activate the custom form, and then the Save button to save my work. So did it work? Let's head to the Catalog tab and open the request: Cool! So it's dynamically generating the deployment name based on selections made on the form. Now that it works, I can go back to the custom form and set the &quot;Deployment Name&quot; field to be invisible just like the &quot;Project&quot; one.
Per-site network selection So far, vRA has been automatically placing VMs on networks based solely on which networks are tagged as available for the selected site. I'd like to give my users a bit more control over which network their VMs get attached to, particularly as some networks may be set aside for different functions or have different firewall rules applied.
As a quick recap, I've got five networks available for vRA, split across my two sites using tags:
Name             Subnet          Site  Tags
d1620-Servers-1  172.16.20.0/24  BOW   net:bow
d1630-Servers-2  172.16.30.0/24  BOW   net:bow
d1640-Servers-3  172.16.40.0/24  BOW   net:bow
d1650-Servers-4  172.16.50.0/24  DRE   net:dre
d1660-Servers-5  172.16.60.0/24  DRE   net:dre
I'm going to add additional tags to these networks to further define their purpose.
Name             Purpose     Tags
d1620-Servers-1  Management  net:bow, net:mgmt
d1630-Servers-2  Front-end   net:bow, net:front
d1640-Servers-3  Back-end    net:bow, net:back
d1650-Servers-4  Front-end   net:dre, net:front
d1660-Servers-5  Back-end    net:dre, net:back
I could just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network; they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users:
Name             Tags
d1620-Servers-1  net:bow, net:mgmt, net:bow-mgmt-172.16.20.0
d1630-Servers-2  net:bow, net:front, net:bow-front-172.16.30.0
d1640-Servers-3  net:bow, net:back, net:bow-back-172.16.40.0
d1650-Servers-4  net:dre, net:front, net:dre-front-172.16.50.0
d1660-Servers-5  net:dre, net:back, net:dre-back-172.16.60.0
So I can now use a single tag to positively identify a single network, as long as I know its site and either its purpose or its IP space. I'll reference these tags in a vRO action that will populate a dropdown in the request form with the available networks for the selected site. Unfortunately I couldn't come up with an easy way to dynamically pull the tags into vRO, so I create another Configuration Element to store them. This gets filed under the existing CustomProvisioning folder, and I name it networksPerSite. Each site gets a new variable of type Array/string. The name of the variable matches the site ID, and the contents are just the tags minus the net: prefix.
I created a new action named (appropriately) getNetworksForSite. This will accept siteCode (String) as its input from the Service Broker request form, and will return an array of strings containing the available networks. // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: getNetworksForSite // Inputs: siteCode (String) // Returns: site.value (Array/String) // Get networksPerSite configurationElement var category = Server.getConfigurationElementCategoryWithPath(&#34;CustomProvisioning&#34;) var elements = category.configurationElements for (var i in elements) { if (elements[i].name == &#34;networksPerSite&#34;) { var networksPerSite = elements[i] } } // Lookup siteCode and find available networks try { var site = networksPerSite.getAttributeWithKey(siteCode) } catch (e) { System.debug(&#34;Invalid site.&#34;); } finally { return site.value } Back in Cloud Assembly, I edit the Cloud Template to add an input field called network:
inputs:
  [...]
  network:
    type: string
    title: Network
  [...]
and update the resource configuration for the network entity to constrain it based on input.network instead of input.site as before:
# torchlight! {&#34;lineNumbers&#34;: true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: &lt;...&gt; networks: - network: &#39;\${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:\${to_lower(input.site)}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing constraints: # - tag: &#39;net:\${to_lower(input.site)}&#39; - tag: &#39;net:\${input.network}&#39; Remember that the networksPerSite configuration element contains the portion of the tags after the net: prefix so that's why I include the prefix in the constraint tag here. I just didn't want it to appear in the selection dropdown.
After making this change to the Cloud Template I use the "Create Version" button again to create a new version and tick the option to release it so that it can be picked up by Service Broker. Back on the Service Broker UI, I hit my LAB Content Source again to Save & Import the new change, and then go to customize the form for WindowsDemo again. After dragging-and-dropping the new Network field onto the request form blueprint, I kind of repeat the steps I used for adjusting the Deployment Name field earlier. On the Appearance tab I set it to be a DropDown, and on the Values tab I set it to an external source, net.bowdre.utility/getNetworksForSite. This action only needs a single input so I map Site on the request form to the siteCode input. Now I can just go back to the Catalog tab and request a new deployment to check out my-- Oh yeah. That vRO action gets called as soon as the request form loads - before the required site code has been selected as an input. I could modify the action so that it returns an empty string if the site hasn't been selected yet, but I'm kind of lazy so I'll instead just modify the custom form so that the Site field defaults to the BOW site. Now I can open up the request form and see how well it works: Noice!
Putting it all together now At this point, I'll actually kick off a deployment and see how everything works out. After hitting Submit, I can see that this deployment has a much more friendly name than the previous ones: And I can also confirm that the VM got named appropriately (based on the naming standard I implemented earlier), and it also got placed on the 172.16.60.0/24 network I selected. Very slick. And I think that's a great stopping point for today.
Coming up, I'll describe how I create AD computer objects in site-specific OUs, add notes and custom attributes to the VM in vSphere, and optionally create static DNS records on a Windows DNS server.
`,description:"",url:"https://runtimeterror.dev/vra8-custom-provisioning-part-four/"},"https://runtimeterror.dev/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/":{title:"Automatic unattended expansion of Linux root LVM volume to fill disk",tags:["linux","shell","automation"],content:`While working on my vRealize Automation 8 project, I wanted to let users specify how large a VM's system drive should be and have vRA apply that without any further user intervention. For instance, if the template has a 60GB C: drive and the user specifies that they want it to be 80GB, vRA will embiggen the new VM's VMDK to 80GB and then expand the guest file system to fill up the new free space.
I'll get into the details of how that's implemented from the vRA side #soon, but first I needed to come up with simple scripts to extend the guest file system to fill the disk.
This was pretty straight-forward on Windows with a short PowerShell script to grab the appropriate volume and resize it to its full capacity:
$Partition = Get-Volume -DriveLetter C | Get-Partition
$Partition | Resize-Partition -Size ($Partition | Get-PartitionSupportedSize).sizeMax
It was a bit trickier for Linux systems though. My Linux templates all use LVM to abstract the file systems away from the physical disks, but they may have a different number of physical partitions or different names for the volume groups and logical volumes. So I needed to be able to automagically determine which logical volume was mounted as /, which volume group it was a member of, and which partition on which disk is used for that physical volume. I could then expand the physical partition to fill the disk, expand the volume group to fill the now-larger physical volume, grow the logical volume to fill the volume group, and (finally) extend the file system to fill the logical volume.
I found a great script here↗ that helped with most of those operations, but it required the user to specify the physical and logical volumes. I modified it to auto-detect those, and here's what I came up with:
MBR only
When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
# torchlight! {&#34;lineNumbers&#34;: true} #!/bin/bash # This will attempt to automatically detect the LVM logical volume where / is mounted and then # expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical # volume, and Linux filesystem to consume new free space on the disk. # Adapted from https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh extenddisk() { echo -e &#34;\\n+++Current partition layout of $disk:+++&#34; parted $disk --script unit s print if [ $logical == 1 ]; then parted $disk --script rm $ext_partnum parted $disk --script &#34;mkpart extended \${ext_startsector}s -1s&#34; parted $disk --script &#34;set $ext_partnum lba off&#34; parted $disk --script &#34;mkpart logical ext2 \${startsector}s -1s&#34; else parted $disk --script rm $partnum parted $disk --script &#34;mkpart primary ext2 \${startsector}s -1s&#34; fi parted $disk --script set $partnum lvm on echo -e &#34;\\n\\n+++New partition layout of $disk:+++&#34; parted $disk --script unit s print partx -v -a $disk pvresize $pvname lvextend --extents +100%FREE --resize $lvpath echo -e &#34;\\n+++New root partition size:+++&#34; df -h / | grep -v Filesystem } export LVM_SUPPRESS_FD_WARNINGS=1 mountpoint=$(df --output=source / | grep -v Filesystem) # /dev/mapper/centos-root lvdisplay $mountpoint &gt; /dev/null if [ $? != 0 ]; then echo &#34;Error: $mountpoint does not look like a LVM logical volume. Aborting.&#34; exit 1 fi echo -e &#34;\\n+++Current root partition size:+++&#34; df -h / | grep -v Filesystem lvname=$(lvs --noheadings $mountpoint | awk &#39;{print($1)}&#39;) # root vgname=$(lvs --noheadings $mountpoint | awk &#39;{print($2)}&#39;) # centos lvpath=&#34;/dev/\${vgname}/\${lvname}&#34; # /dev/centos/root pvname=$(pvs | grep $vgname | tail -n1 | awk &#39;{print($1)}&#39;) # /dev/sda2 disk=$(echo $pvname | rev | cut -c 2- | rev) # /dev/sda diskshort=$(echo $disk | grep -Po &#39;[^\\/]+$&#39;) # sda partnum=$(echo $pvname | grep -Po &#39;\\d$&#39;) # 2 startsector=$(fdisk -u -l $disk | grep $pvname | awk &#39;{print $2}&#39;) # 2099200 layout=$(parted $disk --script unit s print) # Model: VMware Virtual disk (scsi) Disk /dev/sda: 83886080s Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 2048s 2099199s 2097152s primary xfs boot 2 2099200s 62914559s 60815360s primary lvm if grep -Pq &#34;^\\s$partnum\\s+.+?logical.+$&#34; &lt;&lt;&lt; &#34;$layout&#34;; then logical=1 ext_partnum=$(parted $disk --script unit s print | grep extended | grep -Po &#39;^\\s\\d\\s&#39; | tr -d &#39; &#39;) ext_startsector=$(parted $disk --script unit s print | grep extended | awk &#39;{print $2}&#39; | tr -d &#39;s&#39;) else logical=0 fi parted $disk --script unit s print | if ! grep -Pq &#34;^\\s$partnum\\s+.+?[^,]+?lvm\\s*$&#34;; then echo -e &#34;Error: $pvname seems to have some flags other than &#39;lvm&#39; set.&#34; exit 1 fi if ! (fdisk -u -l $disk | grep $disk | tail -1 | grep $pvname | grep -q &#34;Linux LVM&#34;); then echo -e &#34;Error: $pvname is not the last LVM volume on disk $disk.&#34; exit 1 fi ls /sys/class/scsi_device/*/device/rescan | while read path; do echo 1 &gt; $path; done ls /sys/class/scsi_host/host*/scan | while read path; do echo &#34;- - -&#34; &gt; $path; done extenddisk And it works beautifully within my environment. Hopefully it'll work for yours too in case you have a similar need!
`,description:"",url:"https://runtimeterror.dev/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/"},"https://runtimeterror.dev/using-powershell-and-a-scheduled-task-to-apply-windows-updates/":{title:"Using PowerShell and a Scheduled Task to apply Windows Updates",tags:["windows","powershell"],content:`In the same vein as my script to automagically resize a Linux LVM volume to use up free space on a disk, I wanted a way to automatically apply Windows updates for servers deployed by my vRealize Automation environment. I'm only really concerned with Windows Server 2019, which includes the built-in Windows Update Provider PowerShell module↗. So this could be as simple as Install-WUUpdates -Updates (Start-WUScan) to scan for and install any available updates.
Unfortunately, I found that this approach can take a long time to run and often exceeded the timeout limits imposed upon my ABX script, causing the PowerShell session to end and terminating the update process. I really needed a way to do this without requiring a persistent session.
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found this blog post with the solution↗.
So here's what I put together:
# torchlight! {&#34;lineNumbers&#34;: true} # This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot. # It creates a scheduled task to start the update process after a one-minute delay so that you don&#39;t have to maintain # the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later. # # This leverages the Windows Update Provider PowerShell module which is included in Windows 10 1709+ and Windows Server 2019. # # Adapted from https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/ $action = New-ScheduledTaskAction -Execute &#39;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe&#39; -Argument &#39;-NoProfile -WindowStyle Hidden -Command &#34;&amp; {Install-WUUpdates -Updates (Start-WUScan); if (Get-WUIsPendingReboot) {shutdown.exe /f /r /d p:2:4 /t 120 /c \`&#34;Rebooting to apply updates\`&#34;}}&#34;&#39; $trigger = New-ScheduledTaskTrigger -Once -At ([DateTime]::Now.AddMinutes(1)) $settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -Hidden Register-ScheduledTask -Action $action -Trigger $trigger -Settings $settings -TaskName &#34;Initial_Updates&#34; -User &#34;NT AUTHORITY\\SYSTEM&#34; -RunLevel Highest $task = Get-ScheduledTask -TaskName &#34;Initial_Updates&#34; $task.Triggers[0].StartBoundary = [DateTime]::Now.AddMinutes(1).ToString(&#34;yyyy-MM-dd&#39;T&#39;HH:mm:ss&#34;) $task.Triggers[0].EndBoundary = [DateTime]::Now.AddHours(2).ToString(&#34;yyyy-MM-dd&#39;T&#39;HH:mm:ss&#34;) $task.Settings.AllowHardTerminate = $True $task.Settings.DeleteExpiredTaskAfter = &#39;PT0S&#39; $task.Settings.ExecutionTimeLimit = &#39;PT2H&#39; $task.Settings.Volatile = $False $task | Set-ScheduledTask It creates the task, sets it to run in one minute, and then updates the task's configuration to make it auto-expire and delete two hours later. When triggered, the task installs all available updates and (if necessary) reboots the system after a 2-minute countdown (which an admin could cancel with shutdown /a, if needed). This could be handy for pasting in from a remote PowerShell session and works great when called from a vRA ABX script too!
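If I want a quick sanity check after pasting that in (entirely optional, and just using the stock ScheduledTasks cmdlets), I can confirm the task registered, see when it will fire, and verify the self-delete setting stuck:
# Verify the task exists and check its next run time
Get-ScheduledTask -TaskName 'Initial_Updates' | Get-ScheduledTaskInfo
# Confirm the auto-cleanup setting applied by the script above
(Get-ScheduledTask -TaskName 'Initial_Updates').Settings.DeleteExpiredTaskAfter   # should return PT0S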
`,description:"",url:"https://runtimeterror.dev/using-powershell-and-a-scheduled-task-to-apply-windows-updates/"},"https://runtimeterror.dev/vra8-custom-provisioning-part-three/":{title:"vRA8 Custom Provisioning: Part Three",tags:["vmware","vra","vro","javascript","powershell"],content:`Picking up after Part Two, I now have a pretty handy vRealize Orchestrator workflow to generate unique hostnames according to a defined naming standard. It even checks against the vSphere inventory to validate the uniqueness. Now I'm going to take it a step (or two, rather) further and extend those checks against Active Directory and DNS.
Active Directory Adding an AD endpoint Remember how I used the built-in vSphere plugin to let vRO query my vCenter(s) for VMs with a specific name? And how that required first configuring the vCenter endpoint(s) in vRO? I'm going to take a very similar approach here.
So as before, I'll first need to run the preinstalled &quot;Add an Active Directory server&quot; workflow: I fill out the Connection tab like so: I don't have SSL enabled on my homelab AD server so I left that unchecked.
On the Authentication tab, I tick the box to use a shared session and specify the service account I'll use to connect to AD. It would be great for later steps if this account has the appropriate privileges to create/delete computer accounts at least within designated OUs. If you've got multiple AD servers, you can use the options on the Alternative Hosts tab to specify those, saving you from having to create a new configuration for each. I've just got the one AD server in my lab, though, so at this point I just hit Run.
Once it completes successfully, I can visit the Inventory section of the vRO interface to confirm that the new Active Directory endpoint shows up: checkForAdConflict Action Since I try to keep things modular, I'm going to write a new vRO action within the net.bowdre.utility module called checkForAdConflict which can be called from the Generate unique hostname workflow. It will take in computerName (String) as an input and return a boolean True if a conflict is found or False if the name is available. It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: checkForAdConflict action // Inputs: computerName (String) // Outputs: (Boolean) var adHosts = AD_HostManager.findAllHosts(); for each (var adHost in adHosts) { var computer = ActiveDirectory.getComputerAD(computerName,adHost); System.log(&#34;Searched AD for: &#34; + computerName); if (computer) { System.log(&#34;Found: &#34; + computer.name); return true; } else { System.log(&#34;No AD objects found.&#34;); return false; } } Adding it to the workflow Now I can pop back over to my massive Generate unique hostname workflow and drop in a new scriptable task between the check for VM name conflicts and return nextVmName tasks. It will bring in candidateVmName (String) as well as conflict (Boolean) as inputs, return conflict (Boolean) as an output, and errMsg (String) will be used for exception handling. If errMsg (String) is thrown, the flow will follow the dashed red line back to the conflict resolution action. I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if conflict (Boolean) was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using System.getModule(&quot;net.bowdre.utility&quot;).checkForAdConflict(candidateVmName). So here's the full script:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: check for AD conflict task // Inputs: candidateVmName (String), conflict (Boolean) // Outputs: conflict (Boolean) if (conflict) { System.log(&#34;Existing conflict found, skipping AD check...&#34;) } else { var result = System.getModule(&#34;net.bowdre.utility&#34;).checkForAdConflict(candidateVmName); // remember this returns &#39;true&#39; if a conflict is encountered if (result == true) { conflict = true; errMsg = &#34;Conflicting AD object found!&#34; System.warn(errMsg) throw(errMsg) } else { System.log(&#34;No AD conflict found for &#34; + candidateVmName) } } Cool, so that's the AD check in the bank. Onward to DNS!
DNS [Update] Thanks to a kind commenter↗, I've learned that my DNS-checking solution detailed below is somewhat unnecessarily complicated. I overlooked it at the time I was putting this together, but vRO does provide a System.resolveHostName() function to easily perform DNS lookups. I've updated the Adding it to the workflow section below with the simplified script which eliminates the need for building an external script with dependencies and importing that as a vRO action, but I'm going to leave those notes in place as well in case anyone else (or Future John) might need to leverage a similar approach to solve another issue.
Seriously. Go ahead and skip to here.
The Challenge (Deprecated) JavaScript can't talk directly to Active Directory on its own, but in the previous action I was able to leverage the AD plugin built into vRO to bridge that gap. Unfortunately, I couldn't find a corresponding pre-installed plugin that will work as a DNS client. vRO 8 does introduce support for using other languages like (cross-platform) PowerShell or Python instead of being restricted to just JavaScript... but I wasn't able to find an easy solution for querying DNS from those languages either without requiring external modules. (The cross-platform version of PowerShell doesn't include handy Windows-centric cmdlets like Get-DnsServerResourceRecord.)
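(For contrast, on a Windows box with the DnsServer module available the whole check could be a one-liner along these lines - which is exactly what isn't an option inside vRO's cross-platform PowerShell runtime. Here somehost just stands in for whichever hostname is being checked:)
# Windows-only DnsServer module - not available to a vRO PowerShell action
Get-DnsServerResourceRecord -ZoneName 'lab.bowdre.net' -Name 'somehost' -ErrorAction SilentlyContinue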
So I'll have to get creative here.
The Solution (Deprecated) Luckily, vRO does provide a way to import scripts bundled with their required modules, and I found the necessary clues for doing that here↗. And I found a DNS client written for cross-platform PowerShell in the form of the DnsClient-PS↗ module. So I'll write a script locally, package it up with the DnsClient-PS module, and import it as a vRO action.
I start by creating a folder to store the script and needed module, and then I create the required handler.ps1 file.
mkdir checkDnsConflicts # [tl! .cmd:2]
cd checkDnsConflicts
touch handler.ps1
I then create a Modules folder and install the DnsClient-PS module:
mkdir Modules # [tl! .cmd:1]
pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
And then it's time to write the PowerShell script in handler.ps1:
# torchlight! {&#34;lineNumbers&#34;: true} # PowerShell: checkForDnsConflict script # Inputs: $inputs.hostname (String), $inputs.domain (String) # Outputs: $queryresult (String) # # Returns true if a conflicting record is found in DNS. Import-Module DnsClient-PS function handler { Param($context, $inputs) $hostname = $inputs.hostname $domain = $inputs.domain $fqdn = $hostname + &#39;.&#39; + $domain Write-Host &#34;Querying DNS for $fqdn...&#34; $resolution = (Resolve-DNS $fqdn) If (-not $resolution.HasError) { Write-Host &#34;Record found:&#34; ($resolution | Select-Object -Expand Answers).ToString() $queryresult = &#34;true&#34; } Else { Write-Host &#34;No record found.&#34; $queryresult = &#34;false&#34; } return $queryresult } Now to package it up in a .zip which I can then import into vRO:
zip -r --exclude=\\*.zip -X checkDnsConflicts.zip . # [tl! .cmd] adding: Modules/ (stored 0%) # [tl! .nocopy:start] adding: Modules/DnsClient-PS/ (stored 0%) adding: Modules/DnsClient-PS/1.0.0/ (stored 0%) adding: Modules/DnsClient-PS/1.0.0/Public/ (stored 0%) adding: Modules/DnsClient-PS/1.0.0/Public/Set-DnsClientSetting.ps1 (deflated 67%) adding: Modules/DnsClient-PS/1.0.0/Public/Resolve-Dns.ps1 (deflated 67%) adding: Modules/DnsClient-PS/1.0.0/Public/Get-DnsClientSetting.ps1 (deflated 65%) adding: Modules/DnsClient-PS/1.0.0/lib/ (stored 0%) adding: Modules/DnsClient-PS/1.0.0/lib/DnsClient.1.3.1-netstandard2.0.xml (deflated 91%) adding: Modules/DnsClient-PS/1.0.0/lib/DnsClient.1.3.1-netstandard2.0.dll (deflated 57%) adding: Modules/DnsClient-PS/1.0.0/lib/System.Buffers.4.4.0-netstandard2.0.xml (deflated 72%) adding: Modules/DnsClient-PS/1.0.0/lib/System.Buffers.4.4.0-netstandard2.0.dll (deflated 44%) adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psm1 (deflated 56%) adding: Modules/DnsClient-PS/1.0.0/Private/ (stored 0%) adding: Modules/DnsClient-PS/1.0.0/Private/Get-NameserverList.ps1 (deflated 68%) adding: Modules/DnsClient-PS/1.0.0/Private/Resolve-QueryOptions.ps1 (deflated 63%) adding: Modules/DnsClient-PS/1.0.0/Private/MockWrappers.ps1 (deflated 47%) adding: Modules/DnsClient-PS/1.0.0/PSGetModuleInfo.xml (deflated 73%) adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%) adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%) adding: handler.ps1 (deflated 49%) # [tl! .nocopy:end] ls # [tl! .cmd] checkDnsConflicts.zip handler.ps1 Modules # [tl! .nocopy] checkForDnsConflict action (Deprecated) And now I can go into vRO, create a new action called checkForDnsConflict inside my net.bowdre.utilities module. This time, I change the Language to PowerCLI 12 (PowerShell 7.0) and switch the Type to Zip to reveal the Import button. Clicking that button lets me browse to the file I need to import. I can also set up the two input variables that the script requires, hostname (String) and domain (String). Adding it to the workflow Just like with the check for AD conflict action, I'll add this onto the workflow as a scriptable task, this time between that action and the return nextVmName one. This will take candidateVmName (String), conflict (Boolean), and requestProperties (Properties) as inputs, and will return conflict (Boolean) as its sole output. The task will use errMsg (String) as its exception binding, which will divert flow via the dashed red line back to the conflict resolution task.
[Update] The below script has been altered to drop the unneeded call to my homemade checkForDnsConflict action and instead use the built-in System.resolveHostName(). Thanks @powertim!
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: check for DNS conflict // Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties) // Outputs: conflict (Boolean) var domain = requestProperties.dnsDomain if (conflict) { System.log(&#34;Existing conflict found, skipping DNS check...&#34;) } else { if (System.resolveHostName(candidateVmName + &#34;.&#34; + domain)) { conflict = true; errMsg = &#34;Conflicting DNS record found!&#34; System.warn(errMsg) throw(errMsg) } else { System.log(&#34;No DNS conflict for &#34; + candidateVmName) } } Testing Once that's all in place, I kick off another deployment to make sure that everything works correctly. After it completes, I can navigate to the Extensibility &gt; Workflow runs section of the vRA interface to review the details: It worked!
But what if there had been conflicts? It's important to make sure that works too. I know that if I run that deployment again, the VM will get named DRE-DTST-XXX008 and then DRE-DTST-XXX009. So I'm going to force conflicts by creating an AD object for one and a DNS record for the other. And I'll kick off another deployment and see what happens. The workflow saw that the last VM was created as -007 so it first grabbed -008. It saw that -008 already existed in AD so incremented up to try -009. The workflow then found that a record for -009 was present in DNS so bumped it up to -010. That name finally passed through the checks and so the VM was deployed with the name DRE-DTST-XXX010. Success!
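For reference, staging those decoy conflicts from the Windows DC only takes a couple of lines - a hedged sketch using the ActiveDirectory and DnsServer modules, with the candidate names from above and an arbitrary placeholder IP on the A record:
# Create a conflicting AD computer object for the first candidate name...
New-ADComputer -Name 'DRE-DTST-XXX008'
# ...and a conflicting DNS record for the next one (the IP is just a placeholder)
Add-DnsServerResourceRecordA -ZoneName 'lab.bowdre.net' -Name 'DRE-DTST-XXX009' -IPv4Address '172.16.30.250'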
Next steps So now I've got a pretty capable workflow for controlled naming of my deployed VMs. The names conform with my established naming scheme and increment predictably in response to naming conflicts in vSphere, Active Directory, and DNS.
In the next post, I'll be enhancing my cloud template to let users pick which network to use for the deployed VM. That sounds simple, but I'll want the list of available networks to be filtered based on the selected site - that means using a Service Broker custom form to query another vRO action. I will also add the ability to create AD computer objects in a site-specific OU and automatically join the server to the domain. And I'll add notes to the VM to make it easier to remember why it was deployed.
Stay tuned!
`,description:"",url:"https://runtimeterror.dev/vra8-custom-provisioning-part-three/"},"https://runtimeterror.dev/vra8-custom-provisioning-part-two/":{title:"vRA8 Custom Provisioning: Part Two",tags:["vmware","vra","vro","javascript"],content:`We last left off this series after I'd set up vRA, performed a test deployment off of a minimal cloud template, and then enhanced the simple template to use vRA tags to let the user specify where a VM should be provisioned. But these VMs have kind of dumb names; right now, they're just getting named after the user who requests it + a random couple of digits, courtesy of a simple naming template defined on the project's Provisioning page↗: I could use this naming template to almost accomplish what I need from a naming solution, but I don't like that the numbers are random rather than an sequence (I want to deploy server001 followed by server002 rather than server343 followed by server718). And it's not enough for me that a VM's name be unique just within the scope of vRA - the hostname should be unique across my entire environment.
So I'm going to have to get my hands dirty and develop a new solution using vRealize Orchestrator. For right now, it should create a name for a VM that fits a defined naming schema, while also ensuring that the name doesn't already exist within vSphere. (I'll add checks against Active Directory and DNS in the next post.)
What's in a name? For my environment, servers should be named like BOW-DAPP-WEB001 where:
BOW indicates the site code. D describes the environment, Development in this case. APP designates the server's function; this one is an application server. WEB describes the primary application running on the server; this one hosts a web server. 001 is a three-digit sequential designation to differentiate between similar servers. So in vRA's custom naming template syntax, this could look something like:
\${site}-\${environment}\${function}-\${application}\${###} Okay, this plan is coming together.
Adding more inputs to the cloud template I'll start by adding those fields as inputs on my cloud template.
I already have a site input at the top of the template, used for selecting the deployment location. I'll leave that there:
# torchlight! {"lineNumbers": true}
inputs:
  site:
    type: string
    title: Site
    enum:
      - BOW
      - DRE
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
# torchlight! {"lineNumbers": true}
environment:
  type: string
  title: Environment
  enum:
    - Development
    - Testing
    - Production
  default: Development
And a dropdown for those function options:
# torchlight! {"lineNumbers": true}
function:
  type: string
  title: Function Code
  oneOf:
    - title: Application (APP)
      const: APP
    - title: Desktop (DSK)
      const: DSK
    - title: Network (NET)
      const: NET
    - title: Service (SVS)
      const: SVS
    - title: Testing (TST)
      const: TST
  default: TST
And finally a text entry field for the application descriptor. Note that this one includes the minLength and maxLength constraints to enforce the three-character format.
# torchlight! {"lineNumbers": true}
app:
  type: string
  title: Application Code
  minLength: 3
  maxLength: 3
  default: xxx
We won't discuss what kind of content this server is going to host...
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for environment since I only want the first letter. I use the substring() function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a dnsDomain property that will be useful later when I need to query for DNS conflicts.
# torchlight! {&#34;lineNumbers&#34;: true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;\${input.image}&#39; flavor: &#39;\${input.size}&#39; site: &#39;\${input.site}&#39; environment: &#39;\${input.environment != &#34;&#34; ? substring(input.environment,0,1) : &#34;&#34;}&#39; function: &#39;\${input.function}&#39; app: &#39;\${input.app}&#39; dnsDomain: lab.bowdre.net So here's the complete template:
# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: site: type: string title: Site enum: - BOW - DRE image: type: string title: Operating System oneOf: - title: Windows Server 2019 const: ws2019 default: ws2019 size: title: Resource Size type: string oneOf: - title: &#39;Micro [1vCPU|1GB]&#39; const: micro - title: &#39;Tiny [1vCPU|2GB]&#39; const: tiny - title: &#39;Small [2vCPU|2GB]&#39; const: small default: small environment: type: string title: Environment enum: - Development - Testing - Production default: Development function: type: string title: Function Code oneOf: - title: Application (APP) const: APP - title: Desktop (DSK) const: DSK - title: Network (NET) const: NET - title: Service (SVS) const: SVS - title: Testing (TST) const: TST default: TST app: type: string title: Application Code minLength: 3 maxLength: 3 default: xxx resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;\${input.image}&#39; flavor: &#39;\${input.size}&#39; site: &#39;\${input.site}&#39; environment: &#39;\${input.environment != &#34;&#34; ? substring(input.environment,0,1) : &#34;&#34;}&#39; function: &#39;\${input.function}&#39; app: &#39;\${input.app}&#39; dnsDomain: lab.bowdre.net networks: - network: &#39;\${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:\${to_lower(input.site)}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing constraints: - tag: &#39;net:\${to_lower(input.site)}&#39; Great! Here's what it looks like on the deployment request: ...but the deployed VM got named john-329. Why? Oh yeah, I need to create a thing that will take these naming elements, mash them together, check for any conflicts, and then apply the new name to the VM. vRealize Orchestrator, it's your time!
Setting up vRO config elements When I first started looking for a naming solution, I found a really handy blog post from Michael Poore↗ that described his solution to doing custom naming. I wound up following his general approach but had to adapt it a bit to make the code work in vRO 8 and to add in the additional checks I wanted. So credit to Michael for getting me pointed in the right direction!
I start by hopping over to the Orchestrator interface and navigating to the Configurations section. I'm going to create a new configuration folder named CustomProvisioning that will store all the Configuration Elements I'll use to configure my workflows on this project. Defining certain variables within configurations separates those from the workflows themselves, making the workflows much more portable. That will allow me to transfer the same code between multiple environments (like my homelab and my lab environment at work) without having to rewrite a bunch of hardcoded values.
Now I'll create a new configuration within the new folder. This will hold information about the naming schema so I name it namingSchema. In it, I create two strings to define the base naming format (up to the numbers on the end) and full name format (including the numbers). I define baseFormat and nameFormat as templates based on what I put together earlier. I also create another configuration named computerNames. When vRO picks a name for a VM, it will record it here as a number variable named after the &quot;base name&quot; (BOW-DAPP-WEB) and the last-used sequence as the value (001). This will make it quick-and-easy to see what the next VM should be named. For now, though, I just need the configuration to not be empty so I add a single variable named sample just to take up space. Okay, now it's time to get messy.
The vRO workflow Just like with the configuration elements, I create a new workflow folder named CustomProvisioning to keep all my workflows together. And then I make a VM Provisioning workflow that will be used for pre-provisioning tasks. On the Inputs/Outputs tab of the workflow, I create a single input named inputProperties of type Properties which will hold all the information about the deployment coming from the vRA side of things. Logging the input properties The first thing I'll want this workflow to do (particularly for testing) is to tell me about the input data from vRA. That will help to eliminate a lot of guesswork. I could just write a script within the workflow to do that, but creating it as a separate action will make it easier to reuse in other workflows. Behold, the logPayloadProperties action (nested within the net.bowdre.utility module which contains some spoilers for what else is to come!): This action has a single input, a Properties object named payload. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as variableName (type).) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: logPayloadProperties // Inputs: payload (Properties) // Outputs: none System.debug(&#34;==== Begin: vRA Event Broker Payload Properties ====&#34;); logAllProperties(payload,0); System.debug(&#34;==== End: vRA Event Broker Payload Properties ====&#34;); function logAllProperties(props,indent) { var keys = (props.keys).sort(); for each (var key in keys) { var prop = props.get(key); var type = System.getObjectType(prop); if (type == &#34;Properties&#34;) { logSingleProperty(key,prop,indent); logAllProperties(prop,indent+1); } else { logSingleProperty(key,prop,indent); } } } function logSingleProperty(name,value,i) { var prefix = &#34;&#34;; if (i &gt; 0) { var prefix = Array(i+1).join(&#34;-&#34;) + &#34; &#34;; } System.debug(prefix + name + &#34; :: &#34; + System.getObjectType(value) + &#34; :: &#34; + value); } Going back to my VM Provisioning workflow, I drag an Action Element onto the canvas and tie it to my new action, passing in inputProperties (Properties) as the input: Event Broker Subscription And at this point I save the workflow. I'm not finished with it - not by a long shot! - but this is a great place to get the workflow plumbed up to vRA and run a quick test. So I go to the vRA interface, hit up the Extensibility tab, and create a new subscription. I name it &quot;VM Provisioning&quot; and set it to fire on the &quot;Compute allocation&quot; event, which will happen right before the VM starts getting created. I link in my VM Provisioning workflow, and also set this as a blocking execution so that no other/future workflows will run until this one completes. Alrighty, let's test this and see if it works. I head back to the Design tab and kick off another deployment. I'm going to go grab some more coffee while this runs. And we're back! Now that the deployment completed successfully, I can go back to the Orchestrator view and check the Workflow Runs section to confirm that the VM Provisioning workflow did fire correctly. I can click on it to get more details, and the Logs tab will show me all the lovely data logged by the logPayloadProperties action running from the workflow. That information can also be seen on the Variables tab: A really handy thing about capturing the data this way is that I can use the Run Again or Debug button to execute the vRO workflow again without having to actually deploy a new VM from vRA. This will be great for testing as I press onward.
(Note that I can't actually edit the workflow directly from this workflow run; I have to go back into the workflow itself to edit it, but then I can re-run the run and it will execute the new code with the already-stored variables.)
Wrapper workflow I'm going to use this VM Provisioning workflow as a sort of top-level wrapper. This workflow will have a task to parse the payload and grab the variables that will be needed for naming a VM, and it will also have a task to actually rename the VM, but it's going to delegate the name generation to another nested workflow. Making the workflows somewhat modular will make it easier to make changes in the future if needed.
Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing the payload - I'll call it parse payload - and pass it inputProperties (Properties) as its input. The script for this is pretty straight-forward:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: parse payload // Inputs: inputProperties (Properties) // Outputs: requestProperties (Properties), originalNames (Array/string) var customProperties = inputProperties.customProperties || new Properties(); var requestProperties = new Properties(); requestProperties.site = customProperties.site; requestProperties.environment = customProperties.environment; requestProperties.function = customProperties.function; requestProperties.app = customProperties.app; requestProperties.dnsDomain = customProperties.dnsDomain; System.debug(&#34;requestProperties: &#34; + requestProperties) originalNames = inputProperties.resourceNames || new Array(); System.debug(&#34;Original names: &#34; + originalNames) It creates a new requestProperties (Properties) variable to store the limited set of properties that will be needed for naming - site, environment, function, and app. It also stores a copy of the original resourceNames (Array/string), which will be useful when we need to replace the old name with the new one. To make those two new variables accessible to other parts of the workflow, I'll need to also create the variables at the workflow level and map them as outputs of this task: I'll also drop in a &quot;Foreach Element&quot; item, which will run a linked workflow once for each item in an input array (originalNames (Array/string) in this case). I haven't actually created that nested workflow yet so I'm going to skip selecting that for now. The final step of this workflow will be to replace the existing contents of resourceNames (Array/string) with the new name.
I'll do that with another scriptable task element, named Apply new names, which takes inputProperties (Properties) and newNames (Array/string) as inputs. It will return resourceNames (Array/string) as a workflow output back to vRA. vRA will see that resourceNames has changed and it will update the name of the deployed resource (the VM) accordingly.
Binding a workflow output
To easily create a new workflow output and bind it to a task's output, click the task's Add New option like usual. Select Output at the top of the New Variable dialog and then complete the form with the other required details. And here's the script for that task:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: Apply new names // Inputs: inputProperties (Properties), newNames (Array/string) // Outputs: resourceNames (Array/string) resourceNames = inputProperties.get(&#34;resourceNames&#34;); for (var i = 0; i &lt; newNames.length; i++) { System.log(&#34;Replacing resourceName &#39;&#34; + resourceNames[i] + &#34;&#39; with &#39;&#34; + newNames[i] + &#34;&#39;&#34;); resourceNames[i] = newNames[i]; } Now's a good time to save this workflow (ignoring the warning about it failing validation for now), and create a new workflow that the VM Provisioning workflow can call to actually generate unique hostnames.
Nested workflow I'm a creative person so I'm going to call this workflow &quot;Generate unique hostname&quot;. It's going to receive requestProperties (Properties) as its sole input, and will return nextVmName (String) as its sole output. I will also need to bind a couple of workflow variables to those configuration elements I created earlier. This is done by creating the variable as usual (baseFormat (string) in this case), toggling the &quot;Bind to configuration&quot; option, and then searching for the appropriate configuration. It's important to make sure the selected type matches that of the configuration element - otherwise it won't show up in the list. I do the same for the nameFormat (string) variable as well. Task: create lock Okay, on to the schema. This workflow may take a little while to execute, and it would be bad if another deployment came in while it was running - the two runs might both assign the same hostname without realizing it. Fortunately vRO has a locking system which can be used to avoid that. Accordingly, I'll add a scriptable task element to the canvas and call it create lock. It will have two inputs used for identifying the lock so that it can be easily removed later on, so I create a new variable lockOwner (string) with the value eventBroker and another named lockId (string) set to namingLock. The script is very short:
// torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: create lock // Inputs: lockOwner (String), lockId (String) // Outputs: none System.debug(&#34;Creating lock...&#34;) LockingSystem.lockAndWait(lockId, lockOwner) Task: generate hostnameBase We're getting to the meat of the operation now - another scriptable task named generate hostnameBase which will take the naming components from the deployment properties and stick them together in the form defined in the nameFormat (String) configuration. The inputs will be the existing nameFormat (String), requestProperties (Properties), and baseFormat (String) variables, and it will output new hostnameBase (String) (&quot;BOW-DAPP-WEB&quot;) and digitCount (Number) (&quot;3&quot;, one for each # in the format) variables. I'll also go ahead and initialize hostnameSeq (Number) to 0 to prepare for a later step. // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: generate hostnameBase // Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String) // Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number) hostnameBase = baseFormat; digitCount = nameFormat.match(/(#)/g).length; hostnameSeq = 0; // Get request keys and drop them into the template for each (var key in requestProperties.keys) { var propValue = requestProperties.get(key); hostnameBase = hostnameBase.replace(&#34;{{&#34; + key + &#34;}}&#34;, propValue); } // Remove leading/trailing special characters from hostname base hostnameBase = hostnameBase.toUpperCase(); hostnameBase = hostnameBase.replace(/([-_]$)/g, &#34;&#34;); hostnameBase = hostnameBase.replace(/^([-_])/g, &#34;&#34;); System.debug(&#34;Hostname base: &#34; + hostnameBase) Interlude: connecting vRO to vCenter Coming up, I'm going to want to connect to vCenter so I can find out if there are any existing VMs with a similar name. I'll use the vSphere vCenter Plug-in which is included with vRO to facilitate that, but that means I'll first need to set up that connection. So I'll save the workflow I've been working on (save early, save often) and then go run the preloaded &quot;Add a vCenter Server instance&quot; workflow. The first page of required inputs is pretty self-explanatory: On the connection properties page, I unchecked the per-user connection in favor of using a single service account, the same one that I'm already using for vRA's connection to vCenter. After successful completion of the workflow, I can go to Administration &gt; Inventory and confirm that the new endpoint is there: I've only got the one vCenter in my lab. At work, I've got multiple vCenters so I would need to repeat these steps to add each of them as an endpoint.
Task: prepare vCenter SDK connection Anyway, back to my &quot;Generate unique hostname&quot; workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of VC:SdkConnection objects: // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: prepare vCenter SDK connection // Inputs: none // Outputs: sdkConnections (Array/VC:SdkConnection) sdkConnections = VcPlugin.allSdkConnections System.log(&#34;Preparing vCenter SDK connection...&#34;) ForEach element: search VMs by name Next, I'm going to drop another ForEach element onto the canvas. For each vCenter endpoint in sdkConnections (Array/VC:SdkConnection), it will execute the workflow titled &quot;Get virtual machines by name with PC&quot;. I map the required vc input to *sdkConnections (VC:SdkConnection), filter to hostnameBase (String), and skip rootVmFolder since I don't care where a VM resides. And I create a new vmsByHost (Array/Array) variable to hold the output. Task: unpack results for all hosts That vmsByHost (Array/array) object contains any and all VMs which match hostnameBase (String), but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names. // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: unpack results for all hosts // Inputs: vmsByHost (Array/Array) // Outputs: vmNames (Array/string) var vms = new Array(); vmNames = new Array(); for (host in vmsByHost) { var a = vmsByHost[host] for (vm in a) { vms.push(a[vm]) } } vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()}) Task: generate hostnameSeq &amp; candidateVmName This scriptable task will check the computerNames configuration element we created earlier to see if we've already named a VM starting with hostnameBase (String). If such a name exists, we'll increment the number at the end by one, and return that as a new hostnameSeq (Number) variable; if it's the first of its kind, hostnameSeq (Number) will be set to 1. And then we'll combine hostnameBase (String) and hostnameSeq (Number) to create the new candidateVmName (String). If things don't work out, this script will throw errMsg (String) so I need to add that as an output exception binding as well. // torchlight! 
{&#34;lineNumbers&#34;: true} // JavaScript: generate hostnameSeq &amp; candidateVmName // Inputs: hostnameBase (String), digitCount (Number) // Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String) // Get computerNames configurationElement, which lives in the &#39;CustomProvisioning&#39; folder // Specify a different path if the CE lives somewhere else var category = Server.getConfigurationElementCategoryWithPath(&#34;CustomProvisioning&#34;) var elements = category.configurationElements for (var i in elements) { if (elements[i].name == &#34;computerNames&#34;) { computerNames = elements[i] } } // Lookup hostnameBase and increment sequence value try { var attribute = computerNames.getAttributeWithKey(hostnameBase); hostnameSeq = attribute.value; System.debug(&#34;Found &#34; + attribute.name + &#34; with sequence &#34; + attribute.value) } catch (e) { System.debug(&#34;Hostname base &#34; + hostnameBase + &#34; does not exist, it will be created.&#34;) } finally { hostnameSeq++; if (hostnameSeq.toString().length &gt; digitCount) { errMsg = &#39;All out of potential VM names, aborting...&#39;; throw(errMsg); } System.debug(&#34;Adding &#34; + hostnameBase + &#34; with sequence &#34; + hostnameSeq) computerNames.setAttributeWithKey(hostnameBase, hostnameSeq) } // Convert number to string and add leading zeroes var hostnameNum = hostnameSeq.toString(); var leadingZeroes = new Array(digitCount - hostnameNum.length + 1).join(&#34;0&#34;); hostnameNum = leadingZeroes + hostnameNum; // Produce our candidate VM name candidateVmName = hostnameBase + hostnameNum; candidateVmName = candidateVmName.toUpperCase(); System.log(&#34;Proposed VM name: &#34; + candidateVmName) Task: check for VM name conflicts Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my candidateVmName (String) against the existing vmNames (Array/string) to see if there are any collisions. If there's a match, it will set a new variable called conflict (Boolean) to true and also report the issue through the errMsg (String) output exception binding. Otherwise it will move on to the next check. // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: check for VM name conflicts // Inputs: candidateVmName (String), vmNames (Array/string) // Outputs: conflict (Boolean) for (i in vmNames) { if (vmNames[i] == candidateVmName) { conflict = true; errMsg = &#34;Found a conflicting VM name!&#34; System.warn(errMsg) throw(errMsg) } } System.log(&#34;No VM name conflicts found for &#34; + candidateVmName) Conflict resolution So what happens if there is a naming conflict? This solution wouldn't be very flexible if it just gave up as soon as it encountered a problem. Fortunately, I planned for this - all I need to do in the event of a conflict is to run the generate hostnameSeq &amp; candidateVmName task again to increment hostnameSeq (Number) by one, use that to create a new candidateVmName (String), and then continue on with the checks.
So far, all of the workflow elements have been connected with happy blue lines which show the flow when everything is going according to the plan. Remember that errMsg (String) from the last task? When that gets thrown, the flow will switch to follow an angry dashed red line (if there is one). After dropping a new scriptable task onto the canvas, I can click on the blue line connecting it to the previous item and then click the red X to make it go away. I can then drag the new element away from the &quot;everything is fine&quot; flow, and connect it to the check for VM name conflict element with that angry dashed red line. Once conflict resolution completes (successfully), a happy blue line will direct the flow back to generate hostnameSeq &amp; candidateVmName so that the sequence can be incremented and the checks performed again. And finally, a blue line will connect the check for VM name conflict task's successful completion to the end of the workflow: All this task really does is clear the conflict (Boolean) flag so that's the only output.
// torchlight! {"lineNumbers": true}
// JavaScript: conflict resolution
// Inputs: none
// Outputs: conflict (Boolean)
System.log("Conflict encountered, trying a new name...")
conflict = false;
So if check VM name conflict encounters a collision with an existing VM name it will set conflict (Boolean) = true; and throw errMsg (String), which will divert the flow to the conflict resolution task. That task will clear the conflict (Boolean) flag and return flow to generate hostnameSeq & candidateVmName, which will attempt to increment hostnameSeq (Number). Note that this task doesn't have a dashed red line escape route; if it needs to throw errMsg (String) because of exhausting the number pool it will abort the workflow entirely.
Task: return nextVmName Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return nextVmName (String) back to the VM Provisioning workflow. That's as simple as setting it to the last value of candidateVmName (String): // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript: return nextVmName // Inputs: candidateVmName (String) // Outputs: nextVmName (String) nextVmName = candidateVmName; System.log(&#34; ***** Selecting [&#34; + nextVmName + &#34;] as the next VM name ***** &#34;) Task: remove lock And we should also remove that lock that we created at the start of this workflow. // torchlight! {&#34;lineNumbers&#34;: true} // JavaScript remove lock // Inputs: lockId (String), lockOwner (String) // Outputs: none System.debug(&#34;Releasing lock...&#34;) LockingSystem.unlock(lockId, lockOwner) Done! Well, mostly. Right now the workflow only actually releases the lock if it completes successfully. Which brings me to:
Default error handler I can use a default error handler to capture an abort due to running out of possible names, release the lock (with an exact copy of the remove lock task), and return (failed) control back to the parent workflow. Because the error handler will only fire when the workflow has failed catastrophically, I'll want to make sure the parent workflow knows about it. So I'll set the end mode to &quot;Error, throw an exception&quot; and bind it to that errMsg (String) variable to communicate the problem back to the parent. Finalizing the VM Provisioning workflow When I had dropped the foreach workflow item into the VM Provisioning workflow earlier, I hadn't configured anything but the name. Now that the nested workflow is complete, I need to fill in the blanks: So for each item in originalNames (Array/string), this will run the workflow named Generate unique hostname. The input to the workflow will be requestProperties (Properties), and the output will be newNames (Array/string).
Putting it all together now Hokay, so. I've got configuration elements which hold the template for how I want servers to be named and also track which names have been used. My cloud template asks the user to input certain details which will be used to create a useful computer name. And I've added an extensibility subscription in Cloud Assembly which will call this vRealize Orchestrator workflow before the VM gets created: This workflow first logs all the properties obtained from the vRA side of things, then parses the properties to grab the necessary details. It then passes that information to a nested workflow to actually generate the hostname. Once it gets a result, it updates the deployment properties with the new name so that vRA can configure the VM accordingly.
The nested workflow is a bit more complicated: It first creates a lock to ensure there won't be multiple instances of this workflow running simultaneously, and then processes data coming from the &quot;parent&quot; workflow to extract the details needed for this workflow. It smashes together the naming elements (site, environment, function, etc) to create a naming base, then connects to each defined vCenter to compile a list of any VMs with the same base. The workflow then consults a configuration element to see which (if any) similar names have already been used, and generates a suggested VM name based on that. It then consults the existing VM list to see if there might be any collisions; if so, it flags the conflict and loops back to generate a new name and try again. Once the conflicts are all cleared, the suggested VM name is made official and returned back up to the VM Provisioning workflow.
Cool. But does it actually work?
Testing Remember how I had tested my initial workflow just to see which variables got captured? Now that the workflow has a bit more content, I can just re-run it without having to actually provision a new VM. After doing so, the logging view reports that it worked! I can also revisit the computerNames configuration element to see the new name reflected there: If I run the workflow again, I should see DRE-DTST-XXX002 assigned, and I do! And, finally, I can go back to vRA and request a new VM and confirm that the name gets correctly applied to the VM. It's so beautiful!
Wrap-up At this point, I'm tired of typing and I'm sure you're tired of reading. In the next installment, I'll go over how I modify this workflow to also check for naming conflicts in Active Directory and DNS. That sounds like it should be pretty simple but, well, you'll see.
See you then!
vRA8 Custom Provisioning: Part One (https://runtimeterror.dev/vra8-custom-provisioning-part-one/)
I recently shared some details about my little self-contained VMware homelab as well as how I integrated {php}IPAM into vRealize Automation 8 for assigning IPs to deployed VMs. For my next trick, I'll be crafting a flexible Cloud Template and accompanying vRealize Orchestrator workflow that will help to deploy and configure virtual machines based on a vRA user's input. Buckle up, this is going to be A Ride.
Objectives Before getting into the how it would be good to start with the what - what exactly are we hoping to accomplish here? For my use case, I'll need a solution which can:
use a single Cloud Template to provision Windows VMs to one of several designated compute resources across multiple sites.
generate a unique VM name which matches a defined naming standard.
allow the requester to specify which site-specific network should be used, and leverage {php}IPAM to assign a static IP.
pre-stage a computer account in Active Directory in a site-specific Organizational Unit and automatically join the new computer to the domain.
create a static record in Microsoft DNS for non-domain systems.
expand the VM's virtual disk and extend the volume inside the guest based on request input.
add specified domain accounts to the guest's local Administrators group based on request input.
annotate the VM in vCenter with a note to describe the server's purpose and custom attributes to identify the responsible party and originating ticket number.
Looking back, that's kind of a lot. I can see why I've been working on this for months!
vSphere setup In production, I'll want to be able to deploy to different computer clusters spanning multiple vCenters. That's a bit difficult to do on a single physical server, but I still wanted to be able to simulate that sort of dynamic resource selection. So for development and testing in my lab, I'll be using two sites - BOW and DRE. I ditched the complicated &quot;just because I can&quot; vSAN I'd built previously and instead spun up two single-host nested clusters, one for each of my sites: Those hosts have one virtual NIC each on a standard switch connected to my home network, and a second NIC each connected to the &quot;isolated&quot; internal lab network with all the VLANs for the guests to run on: vRA setup On the vRA side of things, I logged in to the Cloud Assembly portion and went to the Infrastructure tab. I first created a Project named LAB, added the vCenter as a Cloud Account, and then created a Cloud Zone for the vCenter as well. On the Compute tab of the Cloud Zone properties, I manually added both the BOW and DRE clusters. I also created a Network Profile and added each of the nested dvPortGroups I had created for this purpose. Each network also gets associated with the related IP Range which was imported from {php}IPAM. Since each of my hosts only has 100GB of datastore and my Windows template specifies a 60GB VMDK, I went ahead and created a Storage Profile so that deployments would default to being Thin Provisioned. I created a few Flavor Mappings ranging from micro (1vCPU|1GB RAM) to giant (8vCPU|16GB) but for this resource-constrained lab I'll stick mostly to the micro, tiny (1vCPU|2GB), and small (2vCPU|2GB) sizes. And I created an Image Mapping named ws2019 which points to a Windows Server 2019 Core template I have stored in my lab's Content Library (cleverly-named &quot;LABrary&quot; for my own amusement). And with that, my vRA infrastructure is ready for testing a very basic deployment.
My First Cloud Template Now it's time to leave the Infrastructure tab and visit the Design one, where I'll create a new Cloud Template (what previous versions of vRA called &quot;Blueprints&quot;). I start by dragging one each of the vSphere &gt; Machine and vSphere &gt; Network entities onto the workspace. I then pop over to the Code tab on the right to throw together some simple YAML statements: VMware's got a pretty great document↗ describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
# torchlight! {&#34;lineNumbers&#34;: true} formatVersion: 1 inputs: # Image Mapping image: type: string title: Operating System oneOf: - title: Windows Server 2019 const: ws2019 default: ws2019 formatVersion is always gonna be 1 so we'll skip right past that.
The first input is going to ask the user to select the desired Operating System for this deployment. The oneOf type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly &quot;Windows Server 2019&quot; title which is tied to the ws2019 const value. For now, I'll also set the default value of the field so I don't have to actually click the dropdown each time I test the deployment.
# torchlight! {&#34;lineNumbers&#34;: true} # Flavor Mapping size: title: Resource Size type: string oneOf: - title: &#39;Micro [1vCPU|1GB]&#39; const: micro - title: &#39;Tiny [1vCPU|2GB]&#39; const: tiny - title: &#39;Small [2vCPU|2GB]&#39; const: small default: small Now I'm asking the user to pick the t-shirt size of the VM. These will correspond to the Flavor Mappings I defined earlier. I again chose to use a oneOf data type so that I can show the user more information for each option than is embedded in the name. And I'm setting a default value to avoid unnecessary clicking.
The resources section is where the data from the inputs gets applied to the deployment:
# torchlight! {&#34;lineNumbers&#34;: true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;\${input.image}&#39; flavor: &#39;\${input.size}&#39; networks: - network: &#39;\${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing So I'm connecting the selected input.image to the Image Mapping configured in vRA, and the selected input.size goes back to the Flavor Mapping that will be used for the deployment. I also specify that Cloud_vSphere_Machine_1 should be connected to Cloud_vSphere_Network_1 and that it should use a static (as opposed to dynamic) IP address. Finally, vRA is told that the Cloud_vSphere_Network_1 should be an existing vSphere network.
All together now:
# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
  # Image Mapping
  image:
    type: string
    title: Operating System
    oneOf:
      - title: Windows Server 2019
        const: ws2019
    default: ws2019
  # Flavor Mapping
  size:
    title: Resource Size
    type: string
    oneOf:
      - title: 'Micro [1vCPU|1GB]'
        const: micro
      - title: 'Tiny [1vCPU|2GB]'
        const: tiny
      - title: 'Small [2vCPU|2GB]'
        const: small
    default: small
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: '${input.image}'
      flavor: '${input.size}'
      networks:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
          assignment: static
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
Cool! But does it work? Hitting the Test button at the bottom right is a great way to validate a template before actually running a deployment. That will confirm that the template syntax, infrastructure, and IPAM configuration are all set up correctly to support this particular deployment. Looks good! I like to click on the Provisioning Diagram link to see a bit more detail about where components were placed and why. That's also an immensely helpful troubleshooting option if the test isn't successful. And finally, I can hit that Deploy button to actually spin up this VM. Each deployment has to have a unique deployment name. I got tired of trying to keep up with what names I had already used so I kind of settled on a [DATE]_[TIME] format for my test deployments. I'll automate this tedious step away in the future.
I then confirm that the (automatically-selected default) inputs are correct and kick it off. The deployment will take a few minutes. I like to click over to the History tab to see a bit more detail as things progress. It doesn't take too long for activity to show up on the vSphere side of things: And there's the completed VM - notice the statically-applied IP address courtesy of {php}IPAM! And I can pop over to the IPAM interface to confirm that the IP has been marked as reserved as well: Fantastic! But one of my objectives from earlier was to let the user control where a VM gets provisioned. Fortunately it's pretty easy to implement thanks to vRA 8's use of tags.
Using tags for resource placement Just about every entity within vRA 8 can have tags applied to it, and you can leverage those tags in some pretty creative and useful ways. For now, I'll start by applying tags to my compute resources; I'll use comp:bow for the &quot;BOW Cluster&quot; and comp:dre for the &quot;DRE Cluster&quot;. I'll also use the net:bow and net:dre tags to logically divide up the networks between my sites: I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
# torchlight! {&#34;lineNumbers&#34;: true} inputs: # Datacenter location site: type: string title: Site enum: - BOW - DRE # Image Mapping I'm using the enum option now instead of oneOf since the site names shouldn't require further explanation.
And then I'll add some constraints to the resources section, making use of the to_lower function from the cloud template expression syntax↗ to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
# torchlight! {&#34;lineNumbers&#34;: true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine properties: image: &#39;\${input.image}&#39; flavor: &#39;\${input.size}&#39; networks: - network: &#39;\${resource.Cloud_vSphere_Network_1.id}&#39; assignment: static constraints: - tag: &#39;comp:\${to_lower(input.site)}&#39; Cloud_vSphere_Network_1: type: Cloud.vSphere.Network properties: networkType: existing constraints: - tag: &#39;net:\${to_lower(input.site)}&#39; So the VM will now only be deployed to the compute resource and networks which are tagged to match the selected Site identifier. I ran another test to make sure I didn't break anything: It came back successful, so I clicked through to see the provisioning diagram. On the network tab, I see that only the last two networks (d1650-Servers-4 and d1660-Servers-5) were considered since the first three didn't match the required net:dre tag: And it's a similar story on the compute tab: As a final test for this change, I kicked off one deployment to each site to make sure things worked as expected. Nice!
Conclusion This was kind of an easy introduction to what I've been doing with vRA 8 these past several months. The basic infrastructure (both in vSphere and vRA) will support far more interesting and flexible deployments as I dig deeper. For now, being able to leverage vRA tags for placing workloads on specific compute resources is a great start.
Things will get much more interesting in the next post, where I'll dig into how I'm using vRealize Orchestrator to generate unique computer names which fit a defined naming standard.
Integrating {php}IPAM with vRealize Automation 8 (https://runtimeterror.dev/integrating-phpipam-with-vrealize-automation-8/)
In a previous post, I described some of the steps I took to stand up a homelab including vRealize Automation (vRA) on an Intel NUC 9. One of my initial goals for that lab was to use it for developing and testing a way for vRA to leverage phpIPAM↗ for static IP assignments. The homelab worked brilliantly for that purpose, and those extra internal networks were a big help when it came to testing. I was able to deploy and configure a new VM to host the phpIPAM instance, install the VMware vRealize Third-Party IPAM SDK↗ on my Chromebook's Linux environment, develop and build the integration component, import it to my vRA environment, and verify that deployments got addressed accordingly.
The resulting integration is available on Github here↗. This was actually the second integration I'd worked on, having fumbled my way through a Solarwinds integration↗ earlier last year. VMware's documentation↗ on how to build these things is pretty good, but I struggled to find practical information on how a novice like me could actually go about developing the integration. So maybe these notes will be helpful to anyone seeking to write an integration for a different third-party IP Address Management solution.
If you'd just like to import a working phpIPAM integration into your environment without learning how the sausage is made, you can grab my latest compiled package here↗. You'll probably still want to look through Steps 0-2 to make sure your IPAM instance is set up similarly to mine.
Step 0: phpIPAM installation and base configuration Before even worrying about the SDK, I needed to get a phpIPAM instance ready↗. I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my &quot;Home&quot; network (192.168.1.0/24). I installed Ubuntu 20.04.1 LTS, and then used this guide↗ to install phpIPAM.
Once phpIPAM was running and accessible via the web interface, I then used openssl to generate a self-signed certificate to be used for the SSL API connection:
sudo mkdir /etc/apache2/certificate # [tl! .cmd:2] cd /etc/apache2/certificate/ sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key I edited the apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443:
# torchlight! {"lineNumbers": true}
<VirtualHost *:80>
  ServerName ipam.lab.bowdre.net
  Redirect permanent / https://ipam.lab.bowdre.net
</VirtualHost>
<VirtualHost *:443>
  DocumentRoot "/var/www/html/phpipam"
  ServerName ipam.lab.bowdre.net
  <Directory "/var/www/html/phpipam">
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
  </Directory>
  ErrorLog "/var/log/apache2/phpipam-error_log"
  CustomLog "/var/log/apache2/phpipam-access_log" combined
  SSLEngine on
  SSLCertificateFile /etc/apache2/certificate/apache-certificate.crt
  SSLCertificateKeyFile /etc/apache2/certificate/apache.key
</VirtualHost>
After restarting apache, I verified that hitting http://ipam.lab.bowdre.net redirected me to https://ipam.lab.bowdre.net, and that the connection was secured with the shiny new certificate.
Remember how I've got a &quot;Home&quot; network as well as several internal networks which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to 172.16.0.0/16 would have to go through the Vyos router at 192.168.1.100.
This is Ubuntu, so I edited /etc/netplan/99-netcfg-vmware.yaml to add the routes section at the bottom:
# torchlight! {&#34;lineNumbers&#34;: true} # /etc/netplan/99-netcfg-vmware.yaml network: version: 2 renderer: networkd ethernets: ens160: dhcp4: no dhcp6: no addresses: - 192.168.1.14/24 gateway4: 192.168.1.1 nameservers: search: - lab.bowdre.net addresses: - 192.168.1.5 routes: # [tl! focus:3] - to: 172.16.0.0/16 via: 192.168.1.100 metric: 100 I then ran sudo netplan apply so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the 172.16.10.0/24 network:
sudo netplan apply # [tl! .cmd] ip route # [tl! .cmd] default via 192.168.1.1 dev ens160 proto static # [tl! .nocopy:3] 172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100 192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14 ping 172.16.10.12 # [tl! .cmd] PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data. # [tl! .nocopy:7] 64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms 64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms 64 bytes from 172.16.10.12: icmp_seq=3 ttl=64 time=0.241 ms ^C --- 172.16.10.12 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2043ms rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in INSTALL_DIR/functions/scripts/: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran sudo crontab -e to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes:
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php */15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php Step 1: Configuring phpIPAM API access Okay, let's now move on to the phpIPAM web-based UI to continue the setup. After logging in at https://ipam.lab.bowdre.net/, I clicked on the red Administration menu at the right side and selected phpIPAM Settings. Under the Site Settings section, I enabled the Prettify links option, and under the Feature Settings section I toggled on the API component. I then hit Save at the bottom of the page to apply the changes.
Next, I went to the Users item on the left-hand menu to create a new user account which will be used by vRA. I named it vra, set a password for the account, and made it a member of the Operators group, but didn't grant any special module access. The last step in configuring API access is to create an API key. This is done by clicking the API item on that left side menu and then selecting Create API key. I gave it the app ID vra, granted Read/Write permissions, and set the App Security option to &quot;SSL with User token&quot;. Once we get things going, our API calls will authenticate with the username and password to get a token and bind that to the app ID.
Step 2: Configuring phpIPAM subnets Our fancy new IPAM solution is ready to go - except for the whole bit about managing IPs. We need to tell it about the network segments we'd like it to manage. phpIPAM uses &quot;Sections&quot; to group subnets together, so we start by creating a new Section at Administration &gt; IP related management &gt; Sections. I named my new section Lab, and pretty much left all the default options. Be sure that the Operators group has read/write access to this section and the subnets we're going to create inside it! We should also go ahead and create a Nameserver set so that phpIPAM will be able to tell its clients (vRA) what server(s) to use for DNS. Do this at Administration &gt; IP related management &gt; Nameservers. I created a new entry called Lab and pointed it at my internal DNS server, 192.168.1.5. Okay, we're finally ready to start entering our subnets at Administration &gt; IP related management &gt; Subnets. For each one, I entered the Subnet in CIDR format, gave it a useful description, and associated it with my Lab section. I expanded the VLAN dropdown and used the Add new VLAN option to enter the corresponding VLAN information, and also selected the Nameserver I had just created. I also enabled the options Mark as pool, Check hosts status, Discover new hosts, and Resolve DNS names. Update
Since releasing this integration, I've learned that phpIPAM intends for the isPool field to identify networks where the entire range (including the subnet and broadcast addresses) are available for assignment. As a result, I no longer recommend using that field. Instead, consider creating a custom field↗ for tagging networks for vRA availability.
I then used the Scan subnets for new hosts button to run a discovery scan against the new subnet. The scan only found a single host, 172.16.20.1, which is the subnet's gateway address hosted by the Vyos router. I used the pencil icon to edit the IP and mark it as the gateway: phpIPAM now knows the network address, mask, gateway, VLAN, and DNS configuration for this subnet - all things that will be useful for clients seeking an address. I then repeated these steps for the remaining subnets. Now for the real fun!
Step 3: Testing the API Before moving on to developing the integration, it would be good to first get a little bit familiar with the phpIPAM API - and, in the process, validate that everything is set up correctly. First I read through the API documentation↗ and some example API calls↗ to get a feel for it. I then started by firing up a python3 interpreter and defining a few variables as well as importing the requests module for interacting with the REST API:
&gt;&gt;&gt; username = &#39;vra&#39; &gt;&gt;&gt; password = &#39;passw0rd&#39; &gt;&gt;&gt; hostname = &#39;ipam.lab.bowdre.net&#39; &gt;&gt;&gt; apiAppId = &#39;vra&#39; &gt;&gt;&gt; uri = f&#39;https://{hostname}/api/{apiAppId}/&#39; &gt;&gt;&gt; auth = (username, password) &gt;&gt;&gt; import requests Based on reading the API docs, I'll need to use the username and password for initial authentication which will provide me with a token to use for subsequent calls. So I'll construct the URI used for auth, submit a POST to authenticate, verify that the authentication was successful (status_code == 200), and take a look at the response to confirm that I got a token. (For testing, I'm calling requests with verify=False; we'll be able to use certificate verification when these calls are made from vRA.)
&gt;&gt;&gt; auth_uri = f&#39;{uri}/user/&#39; &gt;&gt;&gt; req = requests.post(auth_uri, auth=auth, verify=False) /usr/lib/python3/dist-packages/urllib3/connectionpool.py:849: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) &gt;&gt;&gt; req.status_code 200 &gt;&gt;&gt; req.json() {&#39;code&#39;: 200, &#39;success&#39;: True, &#39;data&#39;: {&#39;token&#39;: &#39;Q66bVm8FTpnmBEJYhl5I4ITp&#39;, &#39;expires&#39;: &#39;2021-02-22 00:52:35&#39;}, &#39;time&#39;: 0.01} Sweet! There's our token! Let's save it to token to make it easier to work with:
&gt;&gt;&gt; token = {&#34;token&#34;: req.json()[&#39;data&#39;][&#39;token&#39;]} &gt;&gt;&gt; token {&#39;token&#39;: &#39;Q66bVm8FTpnmBEJYhl5I4ITp&#39;} Let's see if we can use our new token against the subnets controller to get a list of subnets known to phpIPAM:
subnet_uri = f&#39;{uri}/subnets/&#39; &gt;&gt;&gt; subnets = requests.get(subnet_uri, headers=token, verify=False) /usr/lib/python3/dist-packages/urllib3/connectionpool.py:849: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) &gt;&gt;&gt; req.status_code 200 &gt;&gt;&gt; subnets.json() {&#39;code&#39;: 200, &#39;success&#39;: True, &#39;data&#39;: [{&#39;id&#39;: &#39;7&#39;, &#39;subnet&#39;: &#39;192.168.1.0&#39;, &#39;mask&#39;: &#39;24&#39;, &#39;sectionId&#39;: &#39;1&#39;, &#39;description&#39;: &#39;Home Network&#39;, &#39;linked_subnet&#39;: None, &#39;firewallAddressObject&#39;: None, &#39;vrfId&#39;: None, &#39;masterSubnetId&#39;: &#39;0&#39;, &#39;allowRequests&#39;: &#39;0&#39;, &#39;vlanId&#39;: None, &#39;showName&#39;: &#39;0&#39;, &#39;device&#39;: None, &#39;permissions&#39;: [{&#39;group_id&#39;: 3, &#39;permission&#39;: &#39;1&#39;, &#39;name&#39;: &#39;Guests&#39;, &#39;desc&#39;: &#39;default Guest group (viewers)&#39;, &#39;members&#39;: False}, {&#39;group_id&#39;: 2, &#39;permission&#39;: &#39;2&#39;, &#39;name&#39;: &#39;Operators&#39;, &#39;desc&#39;: &#39;default Operator group&#39;, &#39;members&#39;: [{&#39;username&#39;: &#39;vra&#39;}]}], &#39;pingSubnet&#39;: &#39;1&#39;, &#39;discoverSubnet&#39;: &#39;1&#39;, &#39;resolveDNS&#39;: &#39;1&#39;, &#39;DNSrecursive&#39;: &#39;0&#39;, &#39;DNSrecords&#39;: &#39;0&#39;, &#39;nameserverId&#39;: &#39;1&#39;, &#39;scanAgent&#39;: &#39;1&#39;, &#39;customer_id&#39;: None, &#39;isFolder&#39;: &#39;0&#39;, &#39;isFull&#39;: &#39;0&#39;, &#39;isPool&#39;: &#39;0&#39;, &#39;tag&#39;: &#39;2&#39;, &#39;threshold&#39;: &#39;0&#39;, &#39;location&#39;: [], &#39;editDate&#39;: &#39;2021-02-21 22:45:01&#39;, &#39;lastScan&#39;: &#39;2021-02-21 22:45:01&#39;, &#39;lastDiscovery&#39;: &#39;2021-02-21 22:45:01&#39;, &#39;nameservers&#39;: {&#39;id&#39;: &#39;1&#39;, &#39;name&#39;: &#39;Google NS&#39;, &#39;namesrv1&#39;: &#39;8.8.8.8;8.8.4.4&#39;, &#39;description&#39;: &#39;Google public nameservers&#39;, &#39;permissions&#39;: &#39;1;2&#39;, &#39;editDate&#39;: None}},... Nice! Let's make it a bit more friendly:
&gt;&gt;&gt; subnets = subnets.json()[&#39;data&#39;] &gt;&gt;&gt; for subnet in subnets: ... print(&#34;Found subnet: &#34; + subnet[&#39;description&#39;]) ... Found subnet: Home Network Found subnet: 1610-Management Found subnet: 1620-Servers-1 Found subnet: 1630-Servers-2 Found subnet: VPN Subnet Found subnet: 1640-Servers-3 Found subnet: 1650-Servers-4 Found subnet: 1660-Servers-5 We're in business!
Now that I know how to talk to phpIPAM via its REST API, it's time to figure out how to get vRA to speak that language.
Step 4: Getting started with the vRA Third-Party IPAM SDK I downloaded the SDK from here↗. It's got a pretty good README↗ which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. I also consulted this white paper↗ which describes the inputs provided by vRA and the outputs expected from the IPAM integration.
The README tells you to extract the .zip and make a simple modification to the pom.xml file to &quot;brand&quot; the integration:
# torchlight! {"lineNumbers": true}
<properties>
  <provider.name>phpIPAM</provider.name> <!-- [tl! focus:2] -->
  <provider.description>phpIPAM integration for vRA</provider.description>
  <provider.version>1.0.3</provider.version>
  <provider.supportsAddressSpaces>false</provider.supportsAddressSpaces>
  <provider.supportsUpdateRecord>true</provider.supportsUpdateRecord>
  <provider.supportsOnDemandNetworks>false</provider.supportsOnDemandNetworks>
  <user.id>1000</user.id>
</properties>
You can then kick off the build with mvn package -PcollectDependencies -Duser.id=${UID}, which will (eventually) spit out ./target/phpIPAM.zip. You can then import the package to vRA↗ and test it against the httpbin.org hostname to validate that the build process works correctly.
You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing ./src/main/resources/endpoint-schema.json. I added an apiAppId field:
// torchlight! {&#34;lineNumbers&#34;:true} { &#34;layout&#34;:{ &#34;pages&#34;:[ { &#34;id&#34;:&#34;Sample IPAM&#34;, &#34;title&#34;:&#34;Sample IPAM endpoint&#34;, &#34;sections&#34;:[ { &#34;id&#34;:&#34;section_1&#34;, &#34;fields&#34;:[ { &#34;id&#34;:&#34;apiAppId&#34;, // [tl! focus] &#34;display&#34;:&#34;textField&#34; }, { &#34;id&#34;:&#34;privateKeyId&#34;, &#34;display&#34;:&#34;textField&#34; }, { &#34;id&#34;:&#34;privateKey&#34;, &#34;display&#34;:&#34;passwordField&#34; }, { &#34;id&#34;:&#34;hostName&#34;, &#34;display&#34;:&#34;textField&#34; } ] } ] } ] }, &#34;schema&#34;:{ &#34;apiAppId&#34;:{ &#34;type&#34;:{ &#34;dataType&#34;:&#34;string&#34; }, &#34;label&#34;:&#34;API App ID&#34;, // [tl! focus] &#34;constraints&#34;:{ &#34;required&#34;:true } }, &#34;privateKeyId&#34;:{ &#34;type&#34;:{ &#34;dataType&#34;:&#34;string&#34; }, &#34;label&#34;:&#34;Username&#34;, &#34;constraints&#34;:{ &#34;required&#34;:true } }, &#34;privateKey&#34;:{ &#34;label&#34;:&#34;Password&#34;, &#34;type&#34;:{ &#34;dataType&#34;:&#34;secureString&#34; }, &#34;constraints&#34;:{ &#34;required&#34;:true } }, &#34;hostName&#34;:{ &#34;type&#34;:{ &#34;dataType&#34;:&#34;string&#34; }, &#34;label&#34;:&#34;Hostname&#34;, &#34;constraints&#34;:{ &#34;required&#34;:true } } }, &#34;options&#34;:{ } } Update
Check out the source on GitHub↗ to see how I adjusted the schema to support custom field input.
We've now got the framework in place so let's move on to the first operation we'll need to write. Each operation has its own subfolder under ./src/main/python/, and each contains (among other things) a requirements.txt file which will tell Maven what modules need to be imported and a source.py file which is where the magic happens.
Step 5: 'Validate Endpoint' action We already basically wrote this earlier with the manual tests against the phpIPAM API. This operation basically just needs to receive the endpoint details and credentials from vRA, test the connection against the API, and let vRA know whether or not it was able to authenticate successfully. So let's open ./src/main/python/validate_endpoint/source.py and get to work.
It's always a good idea to start by reviewing the example payload section so that we'll know what data we have to work with:
&#39;&#39;&#39; Example payload: &#34;inputs&#34;: { &#34;authCredentialsLink&#34;: &#34;/core/auth/credentials/13c9cbade08950755898c4b89c4a0&#34;, &#34;endpointProperties&#34;: { &#34;hostName&#34;: &#34;sampleipam.sof-mbu.eng.vmware.com&#34; } } &#39;&#39;&#39; The do_validate_endpoint function has a handy comment letting us know that's where we'll drop in our code:
# torchlight! {&#34;lineNumbers&#34;: true} def do_validate_endpoint(self, auth_credentials, cert): # Your implemention goes here username = auth_credentials[&#34;privateKeyId&#34;] password = auth_credentials[&#34;privateKey&#34;] try: response = requests.get(&#34;https://&#34; + self.inputs[&#34;endpointProperties&#34;][&#34;hostName&#34;], verify=cert, auth=(username, password)) The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit:
# torchlight! {&#34;lineNumbers&#34;: true} def do_validate_endpoint(self, auth_credentials, cert): # Build variables username = auth_credentials[&#34;privateKeyId&#34;] password = auth_credentials[&#34;privateKey&#34;] hostname = self.inputs[&#34;endpointProperties&#34;][&#34;hostName&#34;] apiAppId = self.inputs[&#34;endpointProperties&#34;][&#34;apiAppId&#34;] As before, we'll construct the &quot;base&quot; URI by inserting the hostname and apiAppId, and we'll combine the username and password into our auth variable:
# torchlight! {"lineNumbers": true}
uri = f'https://{hostname}/api/{apiAppId}/'
auth = (username, password)
I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new auth_session() function to do the heavy lifting. Other operations will also need to return the authorization token, but for this run we really just need to know whether the authentication was successful, which we can do by checking req.status_code.
# torchlight! {&#34;lineNumbers&#34;: true} def auth_session(uri, auth, cert): auth_uri = f&#39;{uri}/user/&#39; req = requests.post(auth_uri, auth=auth, verify=cert) return req And we'll call that function from do_validate_endpoint():
# torchlight! {&#34;lineNumbers&#34;: true} # Test auth connection try: response = auth_session(uri, auth, cert) if response.status_code == 200: return { &#34;message&#34;: &#34;Validated successfully&#34;, &#34;statusCode&#34;: &#34;200&#34; } You can view the full code here↗.
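The snippets above only show the happy path, so here's a rough, self-contained sketch of the same validation logic with a failure branch added, pulled out of the class context so it can be run on its own. The error payload shape (reusing the message/statusCode keys) is an assumption on my part; the linked source remains the authoritative version.
import logging
import requests

def auth_session(uri, auth, cert):
    # POST username/password to the phpIPAM user controller; a 200 means the credentials were accepted
    return requests.post(f'{uri}/user/', auth=auth, verify=cert)

def validate_endpoint(hostname, apiAppId, username, password, cert=False):
    uri = f'https://{hostname}/api/{apiAppId}/'
    try:
        response = auth_session(uri, (username, password), cert)
        if response.status_code == 200:
            return {"message": "Validated successfully", "statusCode": "200"}
        # Anything else means phpIPAM rejected the credentials or the app ID
        logging.error(f"Validation failed with status {response.status_code}")
        return {"message": "Validation failed", "statusCode": str(response.status_code)}
    except Exception as e:
        # Network or TLS problems (bad hostname, untrusted self-signed cert, etc.) land here
        logging.error(f"Unable to reach IPAM endpoint: {str(e)}")
        return {"message": "Validation failed", "statusCode": "400"}

# For example: validate_endpoint('ipam.lab.bowdre.net', 'vra', 'vra', 'passw0rd')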
After completing each operation, run mvn package -PcollectDependencies -Duser.id=${UID} to build again, and then import the package to vRA again. This time, you'll see the new "API App ID" field on the form: Confirm that everything worked correctly by hopping over to the Extensibility tab, selecting Action Runs on the left, and changing the User Runs filter to say Integration Runs. Select the newest phpIPAM_ValidateEndpoint action and make sure it has a happy green Completed status. You can also review the Inputs to make sure they look like what you expected:
// torchlight! {&#34;lineNumbers&#34;: true} { &#34;__metadata&#34;: { &#34;headers&#34;: { &#34;tokenId&#34;: &#34;c/FqqI+i9WF47JkCxBsy8uoQxjyq+nlH0exxLYDRzTk=&#34; }, &#34;sourceType&#34;: &#34;ipam&#34; }, &#34;endpointProperties&#34;: { &#34;dcId&#34;: &#34;onprem&#34;, &#34;apiAppId&#34;: &#34;vra&#34;, &#34;hostName&#34;: &#34;ipam.lab.bowdre.net&#34;, &#34;properties&#34;: &#34;[{\\&#34;prop_key\\&#34;:\\&#34;phpIPAM.IPAM.apiAppId\\&#34;,\\&#34;prop_value\\&#34;:\\&#34;vra\\&#34;}]&#34;, &#34;providerId&#34;: &#34;301de00f-d267-4be2-8065-fabf48162dc1&#34;, And we can see that the Outputs reflect our successful result:
{ &#34;message&#34;: &#34;Validated successfully&#34;, &#34;statusCode&#34;: &#34;200&#34; } That's one operation in the bank!
Step 6: 'Get IP Ranges' action So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in ./src/main/python/get_ip_ranges/source.py. We'll start by pulling over our auth_session() function and flesh it out a bit more to return the authorization token:
# torchlight! {&#34;lineNumbers&#34;: true} def auth_session(uri, auth, cert): auth_uri = f&#39;{uri}/user/&#39; req = requests.post(auth_uri, auth=auth, verify=cert) if req.status_code != 200: raise requests.exceptions.RequestException(&#39;Authentication Failure!&#39;) token = {&#34;token&#34;: req.json()[&#39;data&#39;][&#39;token&#39;]} return token We'll then modify do_get_ip_ranges() with our needed variables, and then call auth_session() to get the necessary token:
# torchlight! {&#34;lineNumbers&#34;: true} def do_get_ip_ranges(self, auth_credentials, cert): # Build variables username = auth_credentials[&#34;privateKeyId&#34;] password = auth_credentials[&#34;privateKey&#34;] hostname = self.inputs[&#34;endpoint&#34;][&#34;endpointProperties&#34;][&#34;hostName&#34;] apiAppId = self.inputs[&#34;endpoint&#34;][&#34;endpointProperties&#34;][&#34;apiAppId&#34;] uri = f&#39;https://{hostname}/api/{apiAppId}/&#39; auth = (username, password) # Auth to API token = auth_session(uri, auth, cert) We can then query for the list of subnets, just like we did earlier:
# torchlight! {&#34;lineNumbers&#34;: true} # Request list of subnets subnet_uri = f&#39;{uri}/subnets/&#39; ipRanges = [] subnets = requests.get(f&#39;{subnet_uri}?filter_by=isPool&amp;filter_value=1&#39;, headers=token, verify=cert) subnets = subnets.json()[&#39;data&#39;] I decided to add the extra filter_by=isPool&amp;filter_value=1 argument to the query so that it will only return subnets marked as a pool in phpIPAM. This way I can use phpIPAM for monitoring address usage on a much larger set of subnets while only presenting a handful of those to vRA.
Update
I now filter for networks identified by the designated custom field like so:
# torchlight! {&#34;lineNumbers&#34;: true} # Request list of subnets subnet_uri = f&#39;{uri}/subnets/&#39; if enableFilter == &#34;true&#34;: queryFilter = f&#39;filter_by={filterField}&amp;filter_value={filterValue}&#39; logging.info(f&#34;Searching for subnets matching filter: {queryFilter}&#34;) else: queryFilter = &#39;&#39; logging.info(f&#34;Searching for all known subnets&#34;) ipRanges = [] subnets = requests.get(f&#39;{subnet_uri}?{queryFilter}&#39;, headers=token, verify=cert) subnets = subnets.json()[&#39;data&#39;] Now is a good time to consult that white paper↗ to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return ipRanges which is a list of IpRange objects. IpRange requires id, name, startIPAddress, endIPAddress, ipVersion, and subnetPrefixLength properties. It can also accept description, gatewayAddress, and dnsServerAddresses properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.
For instance, these are pretty direct matches:
# torchlight! {&#34;lineNumbers&#34;: true} ipRange[&#39;id&#39;] = str(subnet[&#39;id&#39;]) ipRange[&#39;description&#39;] = str(subnet[&#39;description&#39;]) ipRange[&#39;subnetPrefixLength&#39;] = str(subnet[&#39;mask&#39;]) phpIPAM doesn't return a name field but I can construct one that will look like 172.16.20.0/24:
ipRange[&#39;name&#39;] = f&#34;{str(subnet[&#39;subnet&#39;])}/{str(subnet[&#39;mask&#39;])}&#34; Working with IP addresses in Python can be greatly simplified by use of the ipaddress module, so I added an import ipaddress statement near the top of the file. I also added it to requirements.txt to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses:
# torchlight! {&#34;lineNumbers&#34;: true} network = ipaddress.ip_network(str(subnet[&#39;subnet&#39;]) + &#39;/&#39; + str(subnet[&#39;mask&#39;])) ipRange[&#39;ipVersion&#39;] = &#39;IPv&#39; + str(network.version) ipRange[&#39;startIPAddress&#39;] = str(network[1]) ipRange[&#39;endIPAddress&#39;] = str(network[-2]) I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have that defined. phpIPAM returns the DNS servers as a semicolon-delineated string; I need them to look like a Python list:
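To make those index expressions a little more concrete, here's what they evaluate to for one of the lab subnets (172.16.20.0/24, picked purely as an example): index 0 would be the network address and -1 the broadcast, so [1] and [-2] give the first and last usable hosts.
import ipaddress

network = ipaddress.ip_network('172.16.20.0/24')
print(network.version)  # 4
print(network[1])       # 172.16.20.1   (first usable address)
print(network[-2])      # 172.16.20.254 (last usable address)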
# torchlight! {&#34;lineNumbers&#34;: true} try: ipRange[&#39;dnsServerAddresses&#39;] = [server.strip() for server in str(subnet[&#39;nameservers&#39;][&#39;namesrv1&#39;]).split(&#39;;&#39;)] except: ipRange[&#39;dnsServerAddresses&#39;] = [] I can also nest another API request to find which address is marked as the gateway for a given subnet:
# torchlight! {&#34;lineNumbers&#34;: true} gw_req = requests.get(f&#34;{subnet_uri}/{subnet[&#39;id&#39;]}/addresses/?filter_by=is_gateway&amp;filter_value=1&#34;, headers=token, verify=cert) if gw_req.status_code == 200: gateway = gw_req.json()[&#39;data&#39;][0][&#39;ip&#39;] ipRange[&#39;gatewayAddress&#39;] = gateway And then I merge each of these ipRange objects into the ipRanges list which will be returned to vRA:
# torchlight! {&#34;lineNumbers&#34;: true} ipRanges.append(ipRange) After rearranging a bit and tossing in some logging, here's what I've got:
# torchlight! {&#34;lineNumbers&#34;: true} for subnet in subnets: ipRange = {} ipRange[&#39;id&#39;] = str(subnet[&#39;id&#39;]) ipRange[&#39;name&#39;] = f&#34;{str(subnet[&#39;subnet&#39;])}/{str(subnet[&#39;mask&#39;])}&#34; ipRange[&#39;description&#39;] = str(subnet[&#39;description&#39;]) logging.info(f&#34;Found subnet: {ipRange[&#39;name&#39;]} - {ipRange[&#39;description&#39;]}.&#34;) network = ipaddress.ip_network(str(subnet[&#39;subnet&#39;]) + &#39;/&#39; + str(subnet[&#39;mask&#39;])) ipRange[&#39;ipVersion&#39;] = &#39;IPv&#39; + str(network.version) ipRange[&#39;startIPAddress&#39;] = str(network[1]) ipRange[&#39;endIPAddress&#39;] = str(network[-2]) ipRange[&#39;subnetPrefixLength&#39;] = str(subnet[&#39;mask&#39;]) # return empty set if no nameservers are defined in IPAM try: ipRange[&#39;dnsServerAddresses&#39;] = [server.strip() for server in str(subnet[&#39;nameservers&#39;][&#39;namesrv1&#39;]).split(&#39;;&#39;)] except: ipRange[&#39;dnsServerAddresses&#39;] = [] # try to get the address marked as the gateway in IPAM gw_req = requests.get(f&#34;{subnet_uri}/{subnet[&#39;id&#39;]}/addresses/?filter_by=is_gateway&amp;filter_value=1&#34;, headers=token, verify=cert) if gw_req.status_code == 200: gateway = gw_req.json()[&#39;data&#39;][0][&#39;ip&#39;] ipRange[&#39;gatewayAddress&#39;] = gateway logging.debug(ipRange) ipRanges.append(ipRange) # Return results to vRA result = { &#34;ipRanges&#34; : ipRanges } return result The full code can be found here↗. You may notice that I removed all the bits which were in the VMware-provided skeleton about paginating the results. I honestly wasn't entirely sure how to implement that, and I also figured that since I'm already limiting the results by the is_pool filter I shouldn't have a problem with the IPAM server returning an overwhelming number of IP ranges. That could be an area for future improvement though.
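To visualize the shape of what ends up in that list, a single entry might look something like this for the 1620-Servers-1 subnet. The id value is illustrative (it's whatever phpIPAM assigned to the subnet); the DNS server and gateway values come from the phpIPAM setup earlier.
# Illustrative only - one IpRange object as assembled by the loop above
example_ip_range = {
    "id": "9",
    "name": "172.16.20.0/24",
    "description": "1620-Servers-1",
    "ipVersion": "IPv4",
    "startIPAddress": "172.16.20.1",
    "endIPAddress": "172.16.20.254",
    "subnetPrefixLength": "24",
    "dnsServerAddresses": ["192.168.1.5"],
    "gatewayAddress": "172.16.20.1"
}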
In any case, it's time to once again use mvn package -PcollectDependencies -Duser.id=${UID} to fire off the build, and then import phpIPAM.zip into vRA.
vRA runs the phpIPAM_GetIPRanges action about every ten minutes so keep checking back on the Extensibility &gt; Action Runs view until it shows up. You can then select the action and review the Log to see which IP ranges got picked up:
[2021-02-21 23:14:04,026] [INFO] - Querying for auth credentials [2021-02-21 23:14:04,051] [INFO] - Credentials obtained successfully! [2021-02-21 23:14:04,089] [INFO] - Found subnet: 172.16.10.0/24 - 1610-Management. [2021-02-21 23:14:04,101] [INFO] - Found subnet: 172.16.20.0/24 - 1620-Servers-1. [2021-02-21 23:14:04,114] [INFO] - Found subnet: 172.16.30.0/24 - 1630-Servers-2. [2021-02-21 23:14:04,126] [INFO] - Found subnet: 172.16.40.0/24 - 1640-Servers-3. [2021-02-21 23:14:04,138] [INFO] - Found subnet: 172.16.50.0/24 - 1650-Servers-4. [2021-02-21 23:14:04,149] [INFO] - Found subnet: 172.16.60.0/24 - 1660-Servers-5. Note that it did not pick up my &quot;Home Network&quot; range since it wasn't set to be a pool.
We can also navigate to Infrastructure &gt; Networks &gt; IP Ranges to view them in all their glory: You can then follow these instructions↗ to associate the external IP ranges with networks available for vRA deployments.
Next, we need to figure out how to allocate an IP.
Step 7: 'Allocate IP' action I think we've got a rhythm going now. So we'll dive in to ./src/main/python/allocate_ip/source.py, create our auth_session() function, and add our variables to the do_allocate_ip() function. I also created a new bundle object to hold the uri, token, and cert items so that I don't have to keep typing those over and over and over.
# torchlight! {&#34;lineNumbers&#34;: true} def auth_session(uri, auth, cert): auth_uri = f&#39;{uri}/user/&#39; req = requests.post(auth_uri, auth=auth, verify=cert) if req.status_code != 200: raise requests.exceptions.RequestException(&#39;Authentication Failure!&#39;) token = {&#34;token&#34;: req.json()[&#39;data&#39;][&#39;token&#39;]} return token def do_allocate_ip(self, auth_credentials, cert): # Build variables username = auth_credentials[&#34;privateKeyId&#34;] password = auth_credentials[&#34;privateKey&#34;] hostname = self.inputs[&#34;endpoint&#34;][&#34;endpointProperties&#34;][&#34;hostName&#34;] apiAppId = self.inputs[&#34;endpoint&#34;][&#34;endpointProperties&#34;][&#34;apiAppId&#34;] uri = f&#39;https://{hostname}/api/{apiAppId}/&#39; auth = (username, password) # Auth to API token = auth_session(uri, auth, cert) bundle = { &#39;uri&#39;: uri, &#39;token&#39;: token, &#39;cert&#39;: cert } I left the remainder of do_allocate_ip() intact but modified its calls to other functions so that my new bundle would be included:
# torchlight! {&#34;lineNumbers&#34;: true} allocation_result = [] try: resource = self.inputs[&#34;resourceInfo&#34;] for allocation in self.inputs[&#34;ipAllocations&#34;]: allocation_result.append(allocate(resource, allocation, self.context, self.inputs[&#34;endpoint&#34;], bundle)) except Exception as e: try: rollback(allocation_result, bundle) except Exception as rollback_e: logging.error(f&#34;Error during rollback of allocation result {str(allocation_result)}&#34;) logging.error(rollback_e) raise e I also added bundle to the allocate() function:
# torchlight! {&#34;lineNumbers&#34;: true} def allocate(resource, allocation, context, endpoint, bundle): last_error = None for range_id in allocation[&#34;ipRangeIds&#34;]: logging.info(f&#34;Allocating from range {range_id}&#34;) try: return allocate_in_range(range_id, resource, allocation, context, endpoint, bundle) except Exception as e: last_error = e logging.error(f&#34;Failed to allocate from range {range_id}: {str(e)}&#34;) logging.error(&#34;No more ranges. Raising last error&#34;) raise last_error The heavy lifting is actually handled in allocate_in_range(). Right now, my implementation only supports doing a single allocation so I added an escape in case someone asks to do something crazy like allocate 2 IPs. I then set up my variables:
# torchlight! {&#34;lineNumbers&#34;: true} def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle): if int(allocation[&#39;size&#39;]) ==1: vmName = resource[&#39;name&#39;] owner = resource[&#39;owner&#39;] uri = bundle[&#39;uri&#39;] token = bundle[&#39;token&#39;] cert = bundle[&#39;cert&#39;] else: # TODO: implement allocation of continuous block of IPs pass raise Exception(&#34;Not implemented&#34;) I construct a payload that will be passed to the phpIPAM API when an IP gets allocated to a VM:
```python
payload = {
    'hostname': vmName,
    'description': f'Reserved by vRA for {owner} at {datetime.now()}'
}
```
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate import datetime statement at the top of this file, and include datetime in requirements.txt.
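One note on that import: the snippet above calls datetime.now() directly, so it's the datetime class that needs to be imported, not just the module. Here's a sketch of what I assume the top of allocate_ip/source.py ends up looking like (the requests and logging imports are already part of the provided skeleton, and ipaddress gets added a little further down):

```python
# sketch of the assumed import block for allocate_ip/source.py (not shown in the post)
import logging

import requests

from datetime import datetime   # the payload description calls datetime.now()
import ipaddress                # used below to determine IPv4 vs IPv6
```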
So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which range_id to use and it will return the first available IP.
```python
allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/'
allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert)
allocate_req = allocate_req.json()
```
Per the white paper, we'll need to return ipAllocationId, ipAddresses, ipRangeId, and ipVersion to vRA in an AllocationResult. Once again, I'll leverage the ipaddress module for figuring the version (and, once again, I'll add it as an import and to the requirements.txt file).
```python
if allocate_req['success']:
    version = ipaddress.ip_address(allocate_req['data']).version
    result = {
        "ipAllocationId": allocation['id'],
        "ipRangeId": range_id,
        "ipVersion": "IPv" + str(version),
        "ipAddresses": [allocate_req['data']]
    }
    logging.info(f"Successfully reserved {str(result['ipAddresses'])} for {vmName}.")
else:
    raise Exception("Unable to allocate IP!")

return result
```
I also implemented a hasty rollback() in case something goes wrong and we need to undo the allocation:
```python
def rollback(allocation_result, bundle):
    uri = bundle['uri']
    token = bundle['token']
    cert = bundle['cert']
    for allocation in reversed(allocation_result):
        logging.info(f"Rolling back allocation {str(allocation)}")
        ipAddresses = allocation.get("ipAddresses", None)
        for ipAddress in ipAddresses:
            rollback_uri = f'{uri}/addresses/{allocation.get("id")}/'
            requests.delete(rollback_uri, headers=token, verify=cert)
    return
```
The full allocate_ip code is here↗. Once more, run mvn package -PcollectDependencies -Duser.id=${UID} and import the new phpIPAM.zip package into vRA. You can then open a Cloud Assembly Cloud Template associated with one of the specified networks and hit the "Test" button to see if it works. You should see a new phpIPAM_AllocateIP action run appear on the Extensibility > Action runs tab. Check the Log for something like this:
```
[2021-02-22 01:31:41,729] [INFO] - Querying for auth credentials
[2021-02-22 01:31:41,757] [INFO] - Credentials obtained successfully!
[2021-02-22 01:31:41,773] [INFO] - Allocating from range 12
[2021-02-22 01:31:41,790] [INFO] - Successfully reserved ['172.16.40.2'] for BOW-VLTST-XXX41.
```
You can also check for a reserved address in phpIPAM: Almost done!
Step 8: 'Deallocate IP' action The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the allocate_ip action with our auth_session() function and variable initialization:
```python
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    if req.status_code != 200:
        raise requests.exceptions.RequestException('Authentication Failure!')
    token = {"token": req.json()['data']['token']}
    return token

def do_deallocate_ip(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]
    hostname = self.inputs["endpoint"]["endpointProperties"]["hostName"]
    apiAppId = self.inputs["endpoint"]["endpointProperties"]["apiAppId"]
    uri = f'https://{hostname}/api/{apiAppId}/'
    auth = (username, password)

    # Auth to API
    token = auth_session(uri, auth, cert)
    bundle = {
        'uri': uri,
        'token': token,
        'cert': cert
    }

    deallocation_result = []
    for deallocation in self.inputs["ipDeallocations"]:
        deallocation_result.append(deallocate(self.inputs["resourceInfo"], deallocation, bundle))

    assert len(deallocation_result) > 0
    return {
        "ipDeallocations": deallocation_result
    }
```
And the deallocate() function is basically a prettier version of the rollback() function from the allocate_ip action:
```python
def deallocate(resource, deallocation, bundle):
    uri = bundle['uri']
    token = bundle['token']
    cert = bundle['cert']
    ip_range_id = deallocation["ipRangeId"]
    ip = deallocation["ipAddress"]

    logging.info(f"Deallocating ip {ip} from range {ip_range_id}")

    deallocate_uri = f'{uri}/addresses/{ip}/{ip_range_id}/'
    requests.delete(deallocate_uri, headers=token, verify=cert)
    return {
        "ipDeallocationId": deallocation["id"],
        "message": "Success"
    }
```
You can review the full code here↗. Build the package with Maven, import to vRA, and run another test deployment. The phpIPAM_DeallocateIP action should complete successfully. Something like this will be in the log:
```
[2021-02-22 01:36:29,438] [INFO] - Querying for auth credentials
[2021-02-22 01:36:29,461] [INFO] - Credentials obtained successfully!
[2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12
```
And the Outputs section of the Details tab will show:
```json
{
  "ipDeallocations": [
    {
      "message": "Success",
      "ipDeallocationId": "/resources/network-interfaces/8e149a2c-d7aa-4e48-b6c6-153ed288aef3"
    }
  ]
}
```
Success! That's it! You can now use phpIPAM for assigning IP addresses to VMs deployed from vRealize Automation 8.x. VMware provides a few additional operations that could be added to this integration in the future (like updating existing records or allocating entire ranges rather than individual IPs) but what I've written so far satisfies the basic requirements, and it works well for my needs.
And maybe, just maybe, the steps I went through developing this integration might help with integrating another IPAM solution.
Using VS Code to explore giant log bundles (https://runtimeterror.dev/using-vs-code-to-explore-giant-log-bundles/; tags: logs, vmware)

I recently ran into a peculiar issue after upgrading my vRealize Automation homelab to the new 8.3 release, and the error message displayed in the UI didn't give me a whole lot of information to work with: I connected to the vRA appliance to try to find the relevant log excerpt, but doing so isn't all that straightforward↗ given the containerized nature of the services. So instead I used the vracli log-bundle command to generate a bundle of all relevant logs, and I then transferred the resulting (2.2GB!) log-bundle.tar to my workstation for further investigation. I expanded the tar and ran tree -P '*.log' to get a quick idea of what I've got to deal with: Ugh. Even if I knew which logs I wanted to look at (and I don't) it would take ages to dig through all of this. There's got to be a better way.
And there is! Visual Studio Code lets you open an entire directory tree in the editor: You can then "Find in Files" with Ctrl+Shift+F, and VS Code will very quickly search through all the files to find what you're looking for: You can also click the "Open in editor" link at the top of the search results to open the matching snippets in a single view: Adjusting the number at the far top right of that view will dynamically tweak how many context lines are included with each line containing the search term.
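For contrast, this is roughly the sort of throwaway search script that trick saves me from writing - just a sketch, with a made-up directory name and search string, and with none of the context view or match navigation the editor gives you for free:

```python
# quick-and-dirty recursive log search, the manual alternative to "Find in Files";
# bundle_dir and needle are placeholders, not paths from the actual log bundle
import os

bundle_dir = "log-bundle"   # wherever the extracted tarball landed
needle = "ERROR"            # whatever string you're hunting for

for root, _dirs, files in os.walk(bundle_dir):
    for name in files:
        if not name.endswith(".log"):
            continue
        path = os.path.join(root, name)
        with open(path, errors="replace") as log:   # log files aren't always clean UTF-8
            for lineno, line in enumerate(log, start=1):
                if needle in line:
                    print(f"{path}:{lineno}: {line.rstrip()}")
```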
In this case, the logs didn't actually tell me what was going wrong - but I felt much better for having explored them! Maybe this little trick will help you track down what's ailing you.
VMware Home Lab on Intel NUC 9 (https://runtimeterror.dev/vmware-home-lab-on-intel-nuc-9/; tags: vmware, homelab, vra, lcm)

I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now so figured it was about time to start documenting and sharing what I've done.
Hardware (Caution: here be affiliate links)
- Intel NUC 9 Extreme (NUC9i9QNX)↗
- Crucial 64GB DDR4 SO-DIMM kit (CT2K32G4SFD8266)↗
- Intel 665p 1TB NVMe SSD (SSDPEKNW010T9X1)↗
- Random 8GB USB thumbdrive I found in a drawer somewhere

The NUC runs ESXi 7.0u1 and currently hosts the following:
- vCenter Server 7.0u1
- Windows 2019 domain controller
- VyOS router↗
- Home Assistant OS 5.9↗
- vRealize Lifecycle Manager 8.2
- vRealize Identity Manager 3.3.2
- vRealize Automation 8.2
- 3-node nested ESXi 7.0u1↗ vSAN cluster

I'm leveraging my $200 vMUG Advantage subscription↗ to provide 365-day licenses for all the VMware bits (particularly vRA, which doesn't come with a built-in evaluation option).
Basic Infrastructure Setting up the NUC The NUC connects to my home network through its onboard gigabit Ethernet interface (vmnic0). (The NUC does have a built-in WiFi adapter but for some reason VMware hasn't yet allowed their hypervisor to connect over WiFi - weird, right?) I wanted to use a small 8GB thumbdrive as the host's boot device so I installed that in one of the NUC's internal USB ports. For the purpose of installation, I connected a keyboard and monitor to the NUC, and I configured the BIOS to automatically boot up when power is restored after a power failure.
I used the Chromebook Recovery Utility to write the ESXi installer ISO to another USB drive (how-to here), inserted that bootable drive to a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (192.168.1.11). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.
I was then able to point my web browser to https://192.168.1.11/ui/ to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (vSwitch0) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described here↗). I created a single datastore to span the entirety of that 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs. Domain Controller I created a new Windows VM with 2 vCPUs, 4GB of RAM, and a 90GB virtual hard drive, and I booted it off a Server 2019 evaluation ISO↗. I gave it a name, a static IP address, and proceeded to install and configure the Active Directory Domain Services and DNS Server roles. I created static A and PTR records for the vCenter Server Appliance I'd be deploying next (vcsa.) and the physical host (nuchost.). I configured ESXi to use this new server for DNS resolutions, and confirmed that I could resolve the VCSA's name from the host.
Before moving on, I installed the Chrome browser on this new Windows VM and also set up remote access via Chrome Remote Desktop↗. This will let me remotely access and manage my lab environment without having to punch holes in the router firewall (or worry about securing said holes). And it's got "chrome" in the name so it will work just fine from my Chromebooks!
vCenter I attached the vCSA installation ISO to the Windows VM and performed the vCenter deployment from there. (See, I told you that Chrome Remote Desktop would come in handy!) After the vCenter was deployed and the basic configuration completed, I created a new cluster to contain the physical host. There's likely only ever going to be the one physical host but I like being able to logically group hosts in this way, particularly when working with PowerCLI. I then added the host to the vCenter by its shiny new FQDN. I've now got a fully-functioning VMware lab, complete with a physical hypervisor to run the workloads, a vCenter server to manage the workloads, and a Windows DNS server to tell the workloads how to talk to each other. Since the goal is to ultimately simulate a (small) production environment, let's set up some additional networking before we add anything else.
Networking Overview My home network uses the generic 192.168.1.0/24 address space, with internet router providing DHCP addresses in the range .100-.250. I'm using the range 192.168.1.2-.99 for statically-configured IPs, particularly those within my lab environment. Here are the addresses being used by the lab so far:
| IP Address   | Hostname | Purpose            |
|--------------|----------|--------------------|
| 192.168.1.1  |          | Gateway            |
| 192.168.1.5  | win01    | AD DC, DNS         |
| 192.168.1.11 | nuchost  | Physical ESXi host |
| 192.168.1.12 | vcsa     | vCenter Server     |

Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA. So I'll be creating these networks:
| VLAN ID | Network        | Purpose    |
|---------|----------------|------------|
| 1610    | 172.16.10.0/24 | Management |
| 1620    | 172.16.20.0/24 | Servers-1  |
| 1630    | 172.16.30.0/24 | Servers-2  |
| 1698    | 172.16.98.0/24 | vSAN       |
| 1699    | 172.16.99.0/24 | vMotion    |

vSwitch1 I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.
VyOS Wouldn't it be great if the VMs that are going to be deployed on those 1610, 1620, and 1630 VLANs could still have their traffic routed out of the internal networks? But doing routing requires a router (or so my network friends tell me)... so I deployed a VM running the open-source VyOS router platform. I used William Lam's instructions for installing VyOS↗, making sure to attach the first network interface to the Home-Network portgroup and the second to the Isolated portgroup (VLAN 4095). I then set to work configuring the router↗.
After logging in to the VM, I entered the router's configuration mode:
```
configure
[edit]
```
I then started with setting up the interfaces - eth0 for the 192.168.1.0/24 network, eth1 on the trunked portgroup, and a number of VIFs on eth1 to handle the individual VLANs I'm interested in using.
```
set interfaces ethernet eth0 address '192.168.1.8/24'
set interfaces ethernet eth0 description 'Outside'
set interfaces ethernet eth1 mtu '9000'
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
set interfaces ethernet eth1 vif 1610 description 'VLAN 1610 for Management'
set interfaces ethernet eth1 vif 1610 mtu '1500'
set interfaces ethernet eth1 vif 1620 address '172.16.20.1/24'
set interfaces ethernet eth1 vif 1620 description 'VLAN 1620 for Servers-1'
set interfaces ethernet eth1 vif 1620 mtu '1500'
set interfaces ethernet eth1 vif 1630 address '172.16.30.1/24'
set interfaces ethernet eth1 vif 1630 description 'VLAN 1630 for Servers-2'
set interfaces ethernet eth1 vif 1630 mtu '1500'
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
set interfaces ethernet eth1 vif 1698 mtu '9000'
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
set interfaces ethernet eth1 vif 1699 mtu '9000'
```
I also set up NAT for the networks that should be routable:
```
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 source address '172.16.10.0/24'
set nat source rule 10 translation address 'masquerade'
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '172.16.20.0/24'
set nat source rule 20 translation address 'masquerade'
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '172.16.30.0/24'
set nat source rule 30 translation address 'masquerade'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'
set protocols static route 0.0.0.0/0 next-hop 192.168.1.1
```
And I configured DNS forwarding:
```
set service dns forwarding allow-from '0.0.0.0/0'
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 30.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain lab.bowdre.net server '192.168.1.5'
set service dns forwarding listen-address '172.16.10.1'
set service dns forwarding listen-address '172.16.20.1'
set service dns forwarding listen-address '172.16.30.1'
set service dns forwarding name-server '192.168.1.1'
```
Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:
```
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 start '172.16.10.100'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 stop '172.16.10.200'
set service dhcp-server shared-network-name SCOPE_20_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 default-router '172.16.20.1'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 start '172.16.20.100'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 stop '172.16.20.200'
set service dhcp-server shared-network-name SCOPE_30_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 default-router '172.16.30.1'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start '172.16.30.100'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200'
```
Satisfied with my work, I ran the commit and save commands. BOOM, this server jockey just configured a router!
Nested vSAN Cluster Alright, it's time to start building up the nested environment. To start, I grabbed the latest Nested ESXi Virtual Appliance .ova↗, courtesy of William Lam. I went ahead and created DNS records for the hosts I'd be deploying, and I mapped out what IPs would be used on each VLAN:
| Hostname              | 1610-Management | 1698-vSAN    | 1699-vMotion |
|-----------------------|-----------------|--------------|--------------|
| esxi01.lab.bowdre.net | 172.16.10.21    | 172.16.98.21 | 172.16.99.21 |
| esxi02.lab.bowdre.net | 172.16.10.22    | 172.16.98.22 | 172.16.99.22 |
| esxi03.lab.bowdre.net | 172.16.10.23    | 172.16.98.23 | 172.16.99.23 |

Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the physical-cluster compute resource, and selected to thin provision the VMDKs on the local datastore. I chose the "Isolated" VM network which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.
And I set the networking properties accordingly:
These virtual appliances come with 3 hard drives. The first will be used as the boot device, the second for vSAN caching, and the third for vSAN capacity. I doubled the size of the second and third drives, to 8GB and 16GB respectively:
After booting the new host VMs, I created a new cluster in vCenter and then added the nested hosts: Next, I created a new Distributed Virtual Switch to break out the VLAN trunk on the nested host &quot;physical&quot; adapters into the individual VLANs I created on the VyOS router. Again, each port group will need to allow Promiscuous Mode and Forged Transmits, and I set the dvSwitch MTU size to 9000 (to support Jumbo Frames on the vSAN and vMotion portgroups). I migrated the physical NICs and vmk0 to the new dvSwitch, and then created new vmkernel interfaces for vMotion and vSAN traffic on each of the nested hosts: I then ssh'd into the hosts and used vmkping to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack so needed to append the -S vmotion flag to the command:
```
vmkping -I vmk1 172.16.98.22
PING 172.16.98.22 (172.16.98.22): 56 data bytes
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
64 bytes from 172.16.98.22: icmp_seq=2 ttl=64 time=0.262 ms

--- 172.16.98.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.255/0.262 ms

vmkping -I vmk2 172.16.99.22 -S vmotion
PING 172.16.99.22 (172.16.99.22): 56 data bytes
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from 172.16.99.22: icmp_seq=2 ttl=64 time=0.242 ms

--- 172.16.99.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.202/0.252/0.312 ms
```
Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity. It'll take a few minutes for vSAN to get configured on the cluster. Huzzah! Next stop:
vRealize Automation 8.2 The vRealize Easy Installer↗ makes it, well, easy to install vRealize Automation (and vRealize Orchestrator, on the same appliance) and its prerequisites, vRealize Suite Lifecycle Manager (LCM) and Workspace ONE Access (formerly VMware Identity Manager) - provided that you've got enough resources. The vRA virtual appliance deploys with a whopping 40GB of memory allocated to it. Post-deployment, I found that I was able to trim that down to 30GB without seeming to break anything, but going much lower than that would result in services failing to start.
Anyhoo, each of these VMs will need to be resolvable in DNS so I started by creating some A records:
| FQDN               | IP           |
|--------------------|--------------|
| lcm.lab.bowdre.net | 192.168.1.40 |
| idm.lab.bowdre.net | 192.168.1.41 |
| vra.lab.bowdre.net | 192.168.1.42 |

I then attached the installer ISO to my Windows VM and ran through the installation from there. Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.
Wrap-up So that's a glimpse into how I built my nested ESXi lab - all for the purpose of being able to develop and test vRealize Automation templates and vRealize Orchestrator workflows in a semi-realistic environment. I've used this setup to write a vRA integration for using phpIPAM↗ to assign static IP addresses to deployed VMs. I wrote a complicated vRO workflow for generating unique hostnames which fit a corporate naming standard and don't conflict with any other names in vCenter, Active Directory, or DNS. I also developed a workflow for (optionally) creating AD objects under appropriate OUs based on properties generated on the cloud template; VMware just announced↗ similar functionality with vRA 8.3 and, honestly, my approach works much better for my needs anyway. And, most recently, I put the finishing touches on a solution for (optionally) creating static records in a Microsoft DNS server from vRO.
I'll post more about all that work soon but this post has already gone on long enough. Stay tuned!
PSA: halt replication before snapshotting linked vCenters (https://runtimeterror.dev/psa-halt-replication-before-snapshotting-linked-vcenters/; tags: vmware)

It's a good idea to take a snapshot of your virtual appliances before applying any updates, just in case. When you have multiple vCenter appliances operating in Enhanced Link Mode, though, it's important to make sure that the snapshots are in a consistent state. The vCenter vmdird service is responsible for continuously syncing data between the vCenters within a vSphere Single Sign-On (SSO) domain. Reverting to a snapshot where vmdird's knowledge of the environment dramatically differed from that of the other vCenters could cause significant problems down the road or even result in having to rebuild a vCenter from scratch.
(Yes, that's a lesson I learned the hard way - and warnings about that are tragically hard to come by from what I've seen. So I'm sharing my notes so that you can avoid making the same mistake.)
Take these steps when you need to snapshot linked vCenters to avoid breaking replication:
Open an SSH session to all the vCenters within the SSO domain.
Log in and enter shell to access the shell on each vCenter.
Verify that replication is healthy by running /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD] on each vCenter. You want to ensure that each host shows as available to all other hosts, and that each partner reports Partner is 0 changes behind.:
```
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa2.lab.bowdre.net
Host available: Yes
Status available: Yes
My last change number: 9346
Partner has seen my change number: 9346
Partner is 0 changes behind.

/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa.lab.bowdre.net
Host available: Yes
Status available: Yes
My last change number: 9518
Partner has seen my change number: 9518
Partner is 0 changes behind.
```
Stop vmdird on each vCenter by running /bin/service-control --stop vmdird:
```
/bin/service-control --stop vmdird
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmdird...
Successfully stopped service vmdird
```
Snapshot the vCenter appliance VMs.
Start replication on each server again with /bin/service-control --start vmdird:
```
/bin/service-control --start vmdird
Operation not cancellable. Please wait for it to finish...
Performing start operation on service vmdird...
Successfully started service vmdird
```
Check the replication status with /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD] again just to be sure. Don't proceed with whatever else you were planning to do until you've confirmed that the vCenters are in sync.
You can learn more about the vdcrepadmin utility here: https://kb.vmware.com/s/article/2127057↗
Burn an ISO to USB with the Chromebook Recovery Utility (https://runtimeterror.dev/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/; tags: chromeos)

There are a number of fantastic Windows applications for creating bootable USB drives from ISO images - but those don't work on a Chromebook. Fortunately there's an easily-available tool which will do the trick: Google's own Chromebook Recovery Utility↗ app.
Normally that tool is used for creating bootable media to reinstall Chrome OS on a broken Chromebook↗ (hence the name), but it can also write other arbitrary images. So if you find yourself needing to create a USB drive for installing ESXi on a computer in your home lab (more on that soon!) here's what you'll need to do:
1. Install the Chromebook Recovery Utility↗.
2. Download the ISO you intend to use.
3. Rename the file to append .bin on the end, after the .iso bit:
4. Plug in the USB drive you're going to sacrifice for this effort - remember that ALL data on the drive will be erased.
5. Open the recovery utility, click on the gear icon at the top right, and select the Use local image option:
6. Browse to and select the *.iso.bin file.
7. Choose the USB drive, and click Continue.
8. Click Create now to start the writing!

All done! It probably won't work great for actually recovering your Chromebook but will do wonders for installing ESXi (or whatever) on another computer! You can also use the CRU to make a bootable USB from a .zip archive containing a single .img file, such as those commonly used to distribute Raspberry Pi images↗.
Very cool!
Auto-connect to ProtonVPN on untrusted WiFi with Tasker [Update!] (https://runtimeterror.dev/auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker/; tags: android, automation, tasker, vpn)

[Update 2021-03-12] This solution recently stopped working for me. While looking for a fix, I found that OpenVPN had published some notes↗ on controlling the official OpenVPN Connect app↗ from Tasker. Jump to the Update below to learn how I adapted my setup with this new knowledge.
I recently shared how I use Tasker and Home Assistant to keep my phone from charging past 80%. Today, I'm going to share the setup I use to automatically connect my phone to a VPN on networks I don't control.
Background Android has an option to set a VPN as Always-On↗ so for maximum security I could just use that. I'm not overly concerned (yet?) with my internet traffic being intercepted upstream of my ISP, though, and often need to connect to other devices on my home network without passing through a VPN (or introducing split-tunnel complexity). But I do want to be sure that my traffic is protected whenever I'm connected to a WiFi network controlled by someone else.
I've recently started using ProtonVPN↗ in conjunction with my paid ProtonMail account so these instructions are tailored to that particular VPN provider. I'm paying for the ProtonVPN Plus subscription but these instructions should also work for the free tier↗ as well. (And this should work for any VPN which provides an OpenVPN config file - you'll just have to find that on your own.)
ProtonVPN does provide a quite excellent Android app↗ but I couldn't find a way to automate it without root. (If your phone is rooted, you should be able to use a Tasker shell to run cmd statusbar click-tile ch.protonvpn.android/com.protonvpn.android.components.QuickTileService and avoid needing to use OpenVPN at all.)
The apps You'll need a few apps to make this work:
- Tasker↗
- OpenVPN for Android↗
- OpenVpn Tasker Plugin↗

It's important to use the open-source↗ 'OpenVPN for Android' app by Arne Schwabe rather than the 'OpenVPN Connect' app, as the latter doesn't work with the Tasker plugin that I used when I originally wrote this guide.
OpenVPN config file You can find instructions for configuring the OpenVPN client to work with ProtonVPN here↗ but I'll go ahead and hit the highlights. You'll probably want to go ahead and do all this from your phone so you don't have to fuss with transferring files around, but hey, you do you.
Log in to your ProtonVPN account (or sign up for a new free one) at https://account.protonvpn.com/login↗. Use the panel on the left side to navigate to Downloads > OpenVPN configuration files↗. Select the Android platform and UDP as the protocol, unless you have a particular reason to use TCP↗. Select and download the desired config file:
- Secure Core configs utilize the Secure Core↗ feature which connects you to a VPN node in your target country by way of a Proton-owned-and-managed server in privacy-friendly Iceland, Sweden, or Switzerland
- Country configs connect to a random VPN node in your target country
- Standard server configs let you choose the specific VPN node to use
- Free server configs connect you to one of the VPN nodes available in the free tier

Feel free to download more than one if you'd like to have different profiles available within the OpenVPN app.
ProtonVPN automatically generates a set of user credentials to use with a third-party VPN client so that you don't have to share your personal creds. You'll want to make a note of that randomly-generated username and password so you can plug them in to the OpenVPN app later. You can find the details at Account > OpenVPN / IKEv2 username↗.
Now that you've got the profile file, skip on down to The Update to import it into OpenVPN Connect.
Configuring OpenVPN for Android Now that you've got the config file(s) and your client credentials, it's time to actually configure that client.
Launch the OpenVPN for Android app and tap the little 'downvote-in-a-box' "Import" icon. Browse to wherever you saved the .ovpn config files and select the one you'd like to use. You can rename it if you'd like but I feel that us.protonvpn.com.udp is pretty self-explanatory and will do just fine to distinguish between my profiles. Tap the check mark at the top-right or the floppy icon at the bottom right to confirm the import. Now tap the pencil icon next to the new entry to edit its settings, and paste in the OpenVPN username and password where appropriate. Use your phone's back button/gesture to save the config and return to the list. Repeat for any other configurations you'd like to import. We'll only use one for this particular Tasker profile but you might come up with different needs for different scenarios. And finally, tap on the config name to test the connection. The OpenVPN Log window will appear, and you want the line at the top to (eventually) display something like Connected: SUCCESS. Success!
I don't like to have a bunch of persistent notification icons hanging around (and Android already shows a persistent status icon when a VPN connection is active). If you're like me, long-press the OpenVPN notification and tap the gear icon. Then tap on the Connection statistics category and activate the Minimized slider. The notification will still appear, but it will collapse to the bottom of your notification stack and you won't get bugged by the icon.
Tasker profiles Open up Tasker and get ready to automate! We're going to wind up with at least two new Tasker profiles so (depending on how many you already have) you might want to create a new project by long-pressing the Home icon at the bottom-left of the screen and selecting the Add option. I chose to group all my VPN-related profiles in a project named (oh-so-creatively) "VPN". Totally your call though.
Let's start with a profile to track whether or not we're connected to one of our preferred/trusted WiFi networks:
Trusted WiFi Tap the '+' sign to create a new profile, and add a new State > Net > Wifi Connected context. This profile will become active whenever your phone connects to WiFi. Tap the magnifying glass next to the SSID field, which will pop up a list of all detected nearby network identifiers. Tap to select whichever network(s) you'd like to be considered "safe". You can also manually enter the SSID names, separating multiple options with a / (ex, FBI Surveillance Van/TellMyWifiLoveHer/Pretty fly for a WiFi). Or, for more security, identify the networks based on the MACs instead of the SSIDs - just be sure to capture the MACs for any extenders or mesh nodes too! Once you've got your networks added, tap the back button to move forward to the next task (Ah, Android!): configuring the action which will occur when the context is satisfied. Tap the New Task option and then tap the check mark to skip giving it a name (no need). Hit the '+' button to add an action and select Variables > Variable Set. For Name, enter %TRUSTED_WIFI (all caps to make it a "public" variable), and for the To field just enter 1. Hit back to save the action, and back again to save the profile. Back at the profile list, long-press on the Variable Set... action and then select Add Exit Task. We want to un-set the variable when no longer connected to a trusted WiFi network so add a new Variables > Variable Clear action and set the name to %TRUSTED_WIFI. And back back out to admire your handiwork. Here's a recap of the profile:
```
Profile: Trusted Wifi
  State: Wifi Connected [ SSID:FBI Surveillance Van/TellMyWifiLoveHer/Pretty fly for a WiFi MAC:* IP:* Active:Any ]
  Enter: Anon
    A1: Variable Set [ Name:%TRUSTED_WIFI To:1 Recurse Variables:Off Do Maths:Off Append:Off Max Rounding Digits:0 ]
  Exit: Anon
    A1: Variable Clear [ Name:%TRUSTED_WIFI Pattern Matching:Off Local Variables Only:Off Clear All Variables:Off ]
```
Onward!
VPN on Strange WiFi This profile will kick in if the phone connects to a WiFi network which isn't on the "approved" list - when the %TRUSTED_WIFI variable is not set.
It starts out the same way by creating a new profile with the State > Net > Wifi Connected context but this time don't add any network names to the list. For the action, select Plugin > OpenVpn Tasker Plugin, tap the pencil icon to edit the configuration, and select your VPN profile from the list under Connect using profile. Back at the Action Edit screen, tap the checkbox next to If and enter the variable name %TRUSTED_WIFI. Tap the '~' button to change the condition operator to Isn't Set. So while this profile will activate every time you connect to WiFi, the action which connects to the VPN will only fire if the WiFi isn't a trusted network. Back out to the profile list and add a new Exit Task. Add another Plugin > OpenVpn Tasker Plugin task and this time configure it to Disconnect VPN. To recap:
```
Profile: VPN on Strange Wifi
  State: Wifi Connected [ SSID:* MAC:* IP:* Active:Any ]
  Enter: Anon
    A1: OpenVPN [ Configuration:Connect (us.protonvpn.com.udp) Timeout (Seconds):0 ] If [ %TRUSTED_WIFI !Set ]
  Exit: Anon
    A1: OpenVPN [ Configuration:Disconnect Timeout (Seconds):0 ]
```
Conclusion Give it a try - the VPN should automatically activate the next time you connect to a network that's not on your list. If you find that it's not working correctly, you might try adding a short 3-5 second Task > Wait action before the connect/disconnect actions just to give a brief cooldown between state changes.
Epilogue: working with Google's VPN My Google Pixel 5 has a neat option at Settings > Network & internet > Wi-Fi > Wi-Fi preferences > Connect to public networks which will automatically connect the phone to known-decent public WiFi networks and automatically tunnel the connection through a Google VPN. It doesn't provide quite as much privacy as ProtonVPN, of course, but it's enough to keep my traffic safe from prying eyes on those public networks, and the auto-connection option really comes in handy sometimes. Of course, my Tasker setup would see that I'm connected to an unknown network and try to connect to ProtonVPN at the same time the phone was trying to connect to the Google VPN. That wasn't ideal.
I came up with a workaround to treat any network with the Google VPN as "trusted" as long as that VPN was active. I inserted a 10-second Wait before the Connect and Disconnect actions to give the VPN time to stand up, and added two new profiles to detect the Google VPN connection and disconnection.
Google VPN On This one uses an Event > System > Logcat Entry. The first time you try to use that you'll be prompted to use adb to grant Tasker the READ_LOGS permission but the app actually does a great job of walking you through that setup. We'll watch the Vpn component and filter for Established by com.google.android.apps.gcs on tun0, and then set the %TRUSTED_WIFI variable:
```
Profile: Google VPN On
  Event: Logcat Entry [ Output Variables:* Component:Vpn Filter:Established by com.google.android.apps.gcs on tun0 Grep Filter (Check Help):Off ]
  Enter: Anon
    A1: Variable Set [ Name:%TRUSTED_WIFI To:1 Recurse Variables:Off Do Maths:Off Append:Off Max Rounding Digits:3 ]
```
Google VPN Off This one is pretty much the same but the opposite:
```
Profile: Google VPN Off
  Event: Logcat Entry [ Output Variables:* Component:Vpn Filter:setting state=DISCONNECTED, reason=agentDisconnect Grep Filter (Check Help):Off ]
  Enter: Anon
    A1: Variable Clear [ Name:%TRUSTED_WIFI Pattern Matching:Off Local Variables Only:Off Clear All Variables:Off ]
```
Update OpenVPN Connect app configuration After installing and launching the official OpenVPN Connect app↗, tap the "+" button at the bottom right to create a new profile. Swipe over to the "File" tab and import the *.ovpn file you downloaded from ProtonVPN. Paste in the username, tick the "Save password" box, and paste in the password as well. I also chose to rename the profile to something a little bit more memorable - you'll need this name later. From there, hit the "Add" button and then go ahead and tap on your profile to test the connection.
Tasker profiles Go ahead and create the Trusted Wifi profile as described above.
The condition for the VPN on Strange Wifi profile will be the same, but the task will be different. This time, add a System > Send Intent action. You'll need to enter the following details, leaving the other fields blank/default:
```
Action: net.openvpn.openvpn.CONNECT
Cat: None
Extra: net.openvpn.openvpn.AUTOSTART_PROFILE_NAME:PC us.protonvpn.com.udp (replace with your profile name)
Extra: net.openvpn.openvpn.AUTOCONNECT:true
Extra: net.openvpn.openvpn.APP_SECTION:PC
Package: net.openvpn.openvpn
Class: net.openvpn.unified.MainActivity
Target: Activity
If: %TRUSTED_WIFI !Set
```
The Exit Task to disconnect from the VPN uses a similar intent:
```
Action: net.openvpn.openvpn.DISCONNECT
Cat: None
Extra: net.openvpn.openvpn.STOP:true
Package: net.openvpn.openvpn
Class: net.openvpn.unified.MainActivity
Target: Activity
```
All set! You can pop back up to the Epilogue section to continue tweaking to avoid conflicts with Google's auto-connect VPN if you'd like.
Safeguard your Android's battery with Tasker + Home Assistant (https://runtimeterror.dev/safeguard-your-androids-battery-with-tasker-home-assistant/; tags: android, tasker, automation, homeassistant)

A few months ago, I started using the AccuBattery app↗ to keep a closer eye on how I'd been charging my phones. The app has a handy feature that notifies you once the battery level reaches a certain threshold so you can pull the phone off the charger and extend the lithium battery's service life, and it even offers an estimate for what that impact might be. For instance, right now the app indicates that charging my Pixel 5 from 51% to 100% would cause 0.92 wear cycles, while stopping the charge at 80% would impose just 0.17 cycles.
But that depends on me being near my phone and conscious so I can take action when the notification goes off. That's often a big assumption to make - and, frankly, I'm lazy.
I'm fortunately also fairly crafty, so I came up with a way to combine my favorite Android automation app with my chosen home automation platform to take my laziness out of the picture.
The Ingredients
- Wemo Mini Smart Plug↗
- Raspberry Pi 3↗ with Home Assistant↗ installed
- Tasker↗
- Home Assistant Plug-In for Tasker↗

I'm not going to go through how to install Home Assistant on the Pi or how to configure it beyond what's strictly necessary for this particular recipe. The official getting started documentation↗ is a great place to start.
The Recipe
1. Plug the Wemo into a wall outlet, and plug a phone charger into the Wemo.
2. Add the Belkin Wemo integration in Home Assistant, and configure the device and entity. I named mine switchy. Make a note of the Entity ID: switch.switchy. We'll need that later.
3. Either point your phone's browser to your Home Assistant instance's local URL↗, or use the Home Assistant app↗ to access it. Tap your username at the bottom of the menu and scroll all the way down to the Long-Lived Access Tokens section. Tap to create a new token. It doesn't matter what you name it, but be sure to copy the token data once it is generated since you won't be able to display it again. Install the Home Assistant Plug-In for Tasker↗. Open Tasker, create a new Task called 'ChargeOff', and set the action to Plugin > Home Assistant Plug-in for Tasker > Call Service. Tap the pencil icon to edit the configuration, and then tap the plus sign to add a new server. Give it whatever name you like, and then enter your Home Assistant's IP address for the Base URL, followed by the port number 8123. For example, http://192.168.1.99:8123. Paste in the Long-Lived Access Token you generated earlier. Go on and hit the Test Server button to make sure you got it right. It'll wind up looking something like this: For the Service field, you need to tell HA what you want it to do. We want it to turn off a switch so enter switch.turn_off. We'll use the Service Data field to tell it which switch, in JSON format: {"entity_id": "switch.switchy"} Tap Test Service to make sure it works - and verify that the switch does indeed turn off.
4. Hard part is over. Now we just need to set up a profile in Tasker to fire our new task. I named mine 'Charge Limiter'. I started with State > Power > Battery Level and set it to trigger between 81-100%, and also added State > Power > Source: Any so it will only be active while charging. I also only want this to trigger while my phone is charging at home, so I added State > Net > Wifi Connected and then specified my home SSID. Link this profile to the Task you created earlier, and never worry about overcharging your phone again.

You can use a similar Task to turn the switch back on at a set time - or you could configure that automation directly in Home Assistant. I added an action to turn on the switch to my Google Assistant bedtime routine and that works quite well for my needs.
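Under the hood, that Call Service action is just hitting Home Assistant's REST API, so you can test the same call from a computer too. Here's a rough Python sketch using the example Base URL from above; the token value is obviously a placeholder:

```python
# rough equivalent of the Tasker "Call Service" action via Home Assistant's REST API;
# the base URL matches the example in the recipe, and TOKEN is a placeholder
import requests

HASS_URL = "http://192.168.1.99:8123"        # Base URL from the recipe
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # the Long-Lived Access Token

response = requests.post(
    f"{HASS_URL}/api/services/switch/turn_off",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "switch.switchy"},    # same Service Data as in Tasker
    timeout=10,
)
response.raise_for_status()
print(response.json())   # Home Assistant returns the entities that changed state
```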
Showdown: Lenovo Chromebook Duet vs. Google Pixel Slate (https://runtimeterror.dev/showdown-lenovo-chromebook-duet-vs-google-pixel-slate/; tags: chromeos)

Okay, okay, this isn't actually going to be a comparison review between the two wildly-mismatched-but-also-kind-of-similar Chromeblets↗, but rather a (hopefully) brief summary of my experience moving from an $800 Pixel Slate + $200 Google keyboard to a Lenovo Chromebook Duet I picked up on sale for just $200.
Background Up until last week, I'd been using the Slate as my primary personal computing device for the previous 20 months or so, mainly in laptop mode (as opposed to tablet mode). I do a lot of casual web browsing, and I spend a significant portion of my free time helping other users on Google's product support forums as a part of the Google Product Experts program↗. I also work a lot with the Chrome OS Linux (Beta) environment, but I avoid Android apps as much as I can. And I also used the Slate for a bit of Stadia gaming when I wasn't near a Chromecast.
So the laptop experience is generally more important to me than the tablet one. I need to be able to work with a large number of browser tabs, but I don't typically need to do any heavy processing directly on the computer.
I was pretty happy with the Slate, but its expensive keyboard stopped working recently and replacements aren't really available anywhere. Remember, laptop mode is key for my use case so the Pixel Slate became instantly unusable to me.
Size When you put these machines side by side, the first difference that jumps out is the size disparity. The 12.3" Pixel Slate is positively massive next to the 10.1" Lenovo Duet. The Duet is physically smaller so the display itself is of course smaller. I had a brief moment of panic when I first logged in and the setup wizard completely filled the screen. Dialing Chrome OS's display scaling down to 80% strikes a good balance for me between fonts being legible while still displaying enough content to be worthwhile. It can get a bit tight when you've got windows docked side-by-side but I'm getting by okay.
Of course, the smaller size of the Duet also makes it work better as a tablet in my mind. It's comfortable enough to hold with one hand while you interact with the other, whereas the Slate always felt a little too big for that to me. Keyboard A far more impactful size difference is the keyboards though. The Duet keyboard gets a bit cramped, particularly over toward the right side (you know, those pesky braces and semicolons that are never needed when coding): Getting used to typing on this significantly smaller keyboard has been the biggest adjustment so far. The pad on my pinky finger is wider than the last few keys at the right edge of the keyboard so I've struggled with accurately hitting the correct [ or ], and also with smacking Return (and inevitably sending a malformed chat message) when trying to insert an apostrophe. I feel like I'm slowly getting the hang of it, but like I said, it's been an adjustment.
Cover The Pixel Slate's keyboard + folio cover is a single (floppy) piece. The keyboard connects to contacts on the bottom edge of the Slate, and magnets hold it in place. The rear cover then folds and sticks to the back of the Slate with magnets to prop up the tablet in different angles. The magnet setup means you can smoothly transition it through varying levels of tilt, which is pretty nice. But being a single piece means the keyboard might get in the way if you're trying to use it as just a propped-up tablet. And the extra folding in the back takes up a bit of space so the Slate may not work well as a laptop on your actual lap.
The Duet's rear cover has a fabric finish kind of similar to the cases Google offers for their phones, and it provides a great texture for holding the tablet. It sticks to the back of the Duet through the magic of magnets, and the lower half of it folds out to create a really sturdy kickstand. And it's completely separate from the keyboard which is great for when you're using the Duet as a tablet (either handheld or propped up for watching a movie or gaming with Stadia).
And this little kickstand can go low, much lower than the Slate. This makes it perfect for my late-night Stadia sessions while sitting in bed. I definitely prefer this approach compared to what Google did with the Pixel Slate.
Performance The Duet does struggle a bit here. It's basically got a smartphone processor↗ and half the RAM of the Slate. Switching between windows and tabs sometimes takes an extra moment or two to catch up (particularly if said tab has been silently suspended in the background). Similarly, working with Linux apps is just a bit slower than you'd like it to be. Still, I've spent a bit more than a week now with the Duet as my go-to computer and it's never really been slow enough to bother me.
That arm64 processor does make finding compatible Linux packages a little more difficult than it's been on amd64 architectures but a little bit of digging will get past that limitation in most cases.
The upside of that smartphone processor is that the battery life is insane. After about seven hours of light usage today I'm sitting at 63% - with an estimated nine hours remaining. This thing keeps going and going, even while Stadia-ing for hours. Being able to play Far Cry 5 without being tethered to a wall is so nice.
Fingerprint sensor The Duet doesn't have one, and that makes me sad.
Conclusion The Lenovo Chromebook Duet is an incredible little Chromeblet for the price. It clearly can't compete with the significantly-more-expensive Google Pixel Slate on paper, but I'm finding its size to be fantastic for the sort of on-the-go computing I do a lot of. It works better as a lap-top laptop, works better as a tablet in my hand, works better for playing Stadia in bed, and just feels more portable. It's a little sluggish at times and the squished keyboard takes some getting used to but overall I'm pretty happy with this move.
Setting up Linux on a new Lenovo Chromebook Duet (bonus arm64 complications!) (https://runtimeterror.dev/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications/; tags: chromeos, linux, crostini, docker, shell, containers)

I've written in the past about the Linux setup I've been using on my Pixel Slate. My Slate's keyboard stopped working over the weekend, though, and there don't seem to be any replacements (either Google or Brydge) to be found. And then I saw that Walmart had the 64GB Lenovo Chromebook Duet temporarily marked down to a mere $200 - just slightly more than the Slate's keyboard originally cost. So I jumped on that deal, and the little Chromeblet showed up today.
I'll be putting the Duet through the paces in the coming days to see if/how it can replace my now-tablet-only Slate, but first things first: I need Linux. And this may be a little bit different than the setup on the Slate since the Duet's Mediatek processor uses the aarch64/arm64 architecture instead of amd64. (And while I'm writing these steps specific to the Duet, the same steps should work on basically any arm64 Chromebook.)
So journey with me as I get this little guy set up!
Installing Linux This part is dead simple. Just head into Settings > Linux (Beta) and hit the Turn on button: Click Next, review the options for username and initial disk size (which can be easily increased later so there's no real need to change it right now), and then select Install: It takes just a few minutes to download and initialize the termina VM and then create the default penguin container: You're ready to roll once the Terminal opens and gives you a prompt: Your first action should be to go ahead and install any patches:
```
sudo apt update
sudo apt upgrade
```
Zsh, Oh My Zsh, and powerlevel10k theme I've been really getting into this shell setup recently so let's go on and make things comfortable before we move on too much further. Getting zsh is straightforward:
sudo apt install zsh # [tl! .cmd] Go ahead and launch zsh (by typing 'zsh') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to vi-style, and enable both autocd and appendhistory. Once you're back at the (new) penguin% prompt we can move on to installing the Oh My Zsh plugin framework↗.
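(As an aside, those wizard choices mostly boil down to a few lines in ~/.zshrc; if you'd rather skip the wizard, something roughly like this covers the options I mentioned - a sketch, not the wizard's exact output:)
setopt autocd          # typing a directory name by itself cd's into it
setopt appendhistory   # append to the history file instead of overwriting it
bindkey -v             # vi-style command-line editing
Anyway, back to Oh My Zsh.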
Just grab the installer script like so:
wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh # [tl! .cmd] Review it if you'd like (and you should! Always review code before running it!!), and then execute it:
sh install.sh # [tl! .cmd] When asked if you'd like to change your default shell to zsh now, say no. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt: Oh My Zsh is pretty handy because you can easily enable additional plugins↗ to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the powerlevel10k theme↗!
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \ # [tl! .cmd] ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k Now we just need to edit ~/.zshrc to point to the new theme:
sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc # [tl! .cmd] (That just swaps whatever ZSH_THEME line is in ~/.zshrc for ZSH_THEME="powerlevel10k/powerlevel10k".) We'll need to launch another instance of zsh for the theme change to take effect, so first let's go ahead and manually set zsh as our default shell. We can use sudo to get around the whole "don't have a password set" inconvenience:
sudo chsh -s /bin/zsh [username] # [tl! .cmd] Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up: This theme is crazy-configurable, but fortunately the configurator wizard does a great job of helping you choose the options that work best for you. I pick the Classic prompt style, Unicode character set, Dark prompt color, 24-hour time, Angled separators, Sharp prompt heads, Flat prompt tails, 2-line prompt height, Dotted prompt connection, Right prompt frame, Sparse prompt spacing, Fluent prompt flow, Enabled transient prompt, Verbose instant prompt, and (finally) Yes to apply the changes. Looking good!
Visual Studio Code I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer here↗ or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!
curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb # [tl! .cmd:1] sudo apt install ./code_arm64.deb VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with code [filename]: Nice!
Android platform tools (adb and fastboot) I sometimes don't want to wait for my Pixel to get updated naturally, so I love using adb sideload to manually update my phones. Here's what it takes to set that up. Installing adb is as simple as sudo apt install adb. To use it, enable the USB Debugging Developer Option on your phone, and then connect the phone to the Chromebook. You'll get a prompt to connect the phone to Linux: Once you connect the phone to Linux, check the phone to approve the debugging connection. You can then issue adb devices to verify the phone is connected: I've since realized that the platform-tools (adb/fastboot) available in the repos are much older than what are required for flashing a factory image or sideloading an OTA image to a modern Pixel phone. This'll do fine for installing APKs either to your Chromebook or your phone, but I had to pull out my trusty Pixelbook to flash GrapheneOS to my Pixel 4a.
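For reference, once you do have a current enough platform-tools build, the sideload flow goes roughly like this (the OTA filename is just a placeholder for whatever package you downloaded):
adb devices                  # confirm the phone shows up and is authorized
adb reboot recovery          # then choose "Apply update from ADB" in the recovery menu
adb sideload ota_update.zip  # push the OTA package to the phone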
Microsoft PowerShell and VMware PowerCLI [Updated 5/20/2021 with Microsoft's newer instructions] I'm working on setting up a VMware homelab on an Intel NUC 9 so being able to automate things with PowerCLI will be handy.
PowerShell for ARM is still in an early stage so while it is supported↗ it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives here↗, and I grabbed the latest -linux-arm64.tar.gz release I could find here↗.
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz # [tl! .cmd:4] sudo mkdir -p /opt/microsoft/powershell/7 sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 sudo chmod +x /opt/microsoft/powershell/7/pwsh sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh You can then just run pwsh: That was the hard part. To install PowerCLI into your new PowerShell environment, just run Install-Module -Name VMware.PowerCLI at the PS > prompt, and accept the warning about installing a module from an untrusted repository.
I'm planning to use PowerCLI against my homelab without trusted SSL certificates so (note to self) I need to run Set-PowerCLIConfiguration -InvalidCertificateAction Ignore before I try to connect. Woot!
Docker The Linux (Beta) environment consists of a hardened virtual machine (named termina) running an LXC Debian container (named penguin). Know what would be even more fun? Let's run some other containers inside our container!
The docker installation has a few prerequisites:
sudo apt install \ # [tl! .cmd] apt-transport-https \ ca-certificates \ curl \ gnupg-agent \ software-properties-common Then we need to grab the Docker repo key:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - # [tl! .cmd] And then we can add the repo:
sudo add-apt-repository \ # [tl! .cmd] "deb [arch=arm64] https://download.docker.com/linux/debian \ $(lsb_release -cs) \ stable" And finally update the package cache and install docker and its friends:
sudo apt update # [tl! .cmd:1] sudo apt install docker-ce docker-ce-cli containerd.io Xzibit would be proud!
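A couple of quick (optional) sanity checks at this point - these are just the standard Docker post-install steps, nothing Chrome OS specific:
sudo docker run --rm hello-world   # confirms the engine can pull and run a container
sudo usermod -aG docker $USER      # lets you drop the sudo; takes effect after you close and reopen the Terminal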
3D printing utilities Just like last time, I'll want to be sure I can do light 3D part design and slicing on this Chromebook. Once again, I can install FreeCAD with sudo apt install freecad, and this time I didn't have to implement any workarounds for graphical issues: Unfortunately, though, I haven't found a slicer application compiled with support for aarch64/arm64. There's a much older version of Cura available in the default Debian repos but it crashes upon launch. Neither Cura nor PrusaSlicer (or the Slic3r upstream) offer arm64 releases.
So while I can use the Duet for designing 3D models, I won't be able to actually prepare those models for printing without using another device. I'll need to keep looking for another solution here. (If you know of an option I've missed, please let me know!)
Jupyter Notebook I came across a Reddit post↗ today describing how to install conda and get a Jupyter Notebook running on arm64 so I had to give it a try. It actually wasn't that bad!
The key is to grab the appropriate version of conda Miniforge↗, make it executable, and run the installer:
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh # [tl! .cmd:2] chmod +x Miniforge3-Linux-aarch64.sh ./Miniforge3-Linux-aarch64.sh Exit the terminal and relaunch it, and then install Jupyter:
conda install -c conda-forge notebook # [tl! .cmd] You can then launch the notebook with jupyter notebook and it will automatically open up in a Chrome OS browser tab:
Cool! Now I just need to learn what I'm doing with Jupyter - but at least I don't have an excuse about "my laptop won't run it".
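If you want to keep things tidy, the notebook bits can also live in their own conda environment instead of the base install - a rough sketch, with an arbitrary environment name:
conda create -n notebooks -c conda-forge notebook   # dedicated environment with Jupyter in it
conda activate notebooks
jupyter notebook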
Wrap-up I'm sure I'll be installing a few more utilities in the coming days but this covers most of my immediate must-have Linux needs. I'm eager to see how this little Chromeblet does now that I'm settled in.
Fixing WSL2 connectivity when connected to a VPN with wsl-vpnkit (https://runtimeterror.dev/fixing-wsl2-connectivity-when-connected-to-a-vpn-with-wsl-vpnkit/)
I was pretty excited to get WSL2 and Docker working on my Windows 10 1909 laptop a few weeks ago, but I quickly encountered a problem: WSL2 had no network connectivity when connected to my work VPN.
Well, that's not entirely true; Docker worked just fine, but nothing else could talk to anything outside of the WSL environment. I found a few open issues for this problem in the WSL2 Github↗ with suggested workarounds including modifying Windows registry entries, adjusting the metrics assigned to various virtual network interfaces within Windows, and manually setting DNS servers in /etc/resolv.conf. None of these worked for me.
I eventually came across a solution here↗ which did the trick. This takes advantage of the fact that Docker for Windows is already utilizing vpnkit for connectivity - so you may also want to be sure Docker Desktop is configured to start at login.
The instructions worked well for me so I won't rehash them all here. When it came time to modify my /etc/resolv.conf file, I added in two of the internal DNS servers followed by the IP for my home router's DNS service. This allows me to use WSL2 both on and off the corporate network without having to reconfigure things.
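For illustration, the end result looked something like this - every address here is a placeholder rather than a real server:
# /etc/resolv.conf
nameserver 10.0.0.53     # internal corporate DNS
nameserver 10.0.0.54     # internal corporate DNS
nameserver 192.168.1.1   # home router's DNS service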
All I need to do now is execute sudo ./wsl-vpnkit and leave that running in the background when I need to use WSL while connected to the corporate VPN.
Whew! Okay, back to work.
Abusing Chrome's Custom Search Engines for Fun and Profit (https://runtimeterror.dev/abusing-chromes-custom-search-engines-for-fun-and-profit/)
Do you (like me) find yourself frequently searching for information within the same websites over and over? Wouldn't it be great if you could just type your query into your browser's address bar (AKA the Chrome Omnibox) and go straight to the results you need? Well you totally can - and probably already are for certain sites which have inserted themselves as search engines.
The basics Point your browser to chrome://settings/searchEngines to see which sites are registered as Custom Search Engines: Each of these search engine entries has three parts: a name ("Search engine"), a Keyword, and a Query URL. The "Search engine" title is just what will appear in the Omnibox when the search engine gets triggered, the Keyword is what you'll type in the Omnibox to trigger it, and the Query URL tells Chrome how to handle the search. All you have to do is type the keyword, hit your Tab key to activate the search, input your query, and hit Enter: For sites which register themselves automatically, the keyword is often set to something like domain.tld so it might make sense to assign it as something shorter or more descriptive.
The Query URL is basically just what appears in the address bar when you search the site directly, with %s placed where your query text would normally go. You can view these details for a given search entry by tapping the three-dot menu button and selecting "Edit", and you can manually create new entries by hitting that big friendly "Add" button: By searching the site directly, you might find that it supports additional search filters which get appended to the URL: You can add those filters to the Query URL to further customize your Custom Search Engine: I spend a lot of my free time helping out on Google's support forums as a part of their Product Experts program↗, and I often need to quickly look up a Help Center article or previous forum discussion to assist users. I created a set of Custom Search Engines to make that easier: Creating search where there is none Even if the site doesn't have a built-in native search, you can leverage Google's sitesearch operator to create one. I often want to look up a Linux command's man page, so I use this Query URL to search https://www.man7.org/linux/man-pages/↗:
http://google.com/search?q=%s&sitesearch=man7.org%2Flinux%2Fman-pages Speak foreign to me This works for pretty much any site which parses the URL to render certain content. I use this for getting words/phrases instantly translated: Shorter shortcuts Your Query URL doesn't even need to include a query at all! You can use the Custom Search Engines as a sort of hyper-fast shortcut to pages you visit frequently. If I create a new entry with the Keyword searchax and abusing-chromes-custom-search-engines-for-fun-and-profit as the query URL, I can quickly open to this page by typing searchax[tab][enter]: I use that trick pretty regularly for getting back to vCenter appliance management interfaces without having to type out the full FQDN and port number and all that.
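As a concrete example of that translation trick, a Query URL along these lines should do the job with Google Translate (the parameter names are just how that particular site happens to parse its URL, so treat this as illustrative):
https://translate.google.com/?sl=auto&tl=en&text=%s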
Scratchpad hack You can do some other creative stuff too, like speedily accessing a temporary scratchpad for quickly jotting down notes, complete with spellcheck! Just drop this into the Query URL field:
data:text/html;charset=utf-8, <title>Scratchpad</title><style>body {padding: 5%; font-size: 1.5em; font-family: Arial; }"></style><link rel="shortcut icon" href="https://ssl.gstatic.com/docs/documents/images/kix-favicon6.ico"/><body OnLoad='document.body.focus();' contenteditable spellcheck="true" > And give it a nice short keyword - like the single letter 's': With just a bit of tweaking, you can really supercharge Chrome's Omnibox capabilities. Let me know if you come across any other clever uses for this!
Docker on Windows 10 with WSL2 (https://runtimeterror.dev/docker-on-windows-10-with-wsl2/)
Microsoft's Windows Subsystem for Linux (WSL) 2 was recently updated↗ to bring support for less-bleeding-edge Windows 10 versions (like 1903 and 1909). WSL2 is a big improvement over the first iteration (particularly with better Docker support↗) so I was really looking forward to getting WSL2 loaded up on my work laptop.
Here's how.
WSL2 Step Zero: Prereqs You'll need Windows 10 1903 build 18362 or newer (on x64). You can check by running ver from a Command Prompt:
ver # [tl! .cmd_pwsh] Microsoft Windows [Version 10.0.18363.1082] # [tl! .nocopy] We're interested in that third set of numbers. 18363 is bigger than 18362 so we're good to go!
Step One: Enable the WSL feature (Not needed if you've already been using WSL1.) You can do this by dropping the following into an elevated Powershell prompt:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart # [tl! .cmd_pwsh] Step Two: Enable the Virtual Machine Platform feature Drop this in an elevated Powershell:
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart # [tl! .cmd_pwsh] And then reboot (this is still Windows, after all).
Step Three: Install the WSL2 kernel update package Download it from here↗, and double-click the downloaded file to install it.
Step Four: Set WSL2 as your default Open a Powershell window and run:
wsl --set-default-version 2 # [tl! .cmd_pwsh] Step Five: Install a Linux distro, or upgrade an existing one If you're brand new to this WSL thing, head over to the Microsoft Store↗ and download your favorite Linux distribution. Once it's installed, launch it and you'll be prompted to set up a Linux username and password.
If you've already got a WSL1 distro installed, first run wsl -l -v in Powershell to make sure you know the distro name:
wsl -l -v # [tl! .cmd_pwsh] NAME STATE VERSION # [tl! .nocopy:1] * Debian Running 2 And then upgrade the distro to WSL2 with wsl --set-version <distro_name> 2:
PS C:\Users\jbowdre> wsl --set-version Debian 2 # [tl! .cmd_pwsh] Conversion in progress, this may take a few minutes... # [tl! .nocopy] Cool!
Docker Step One: Download Download Docker Desktop for Windows from here↗, making sure to grab the "Edge" version since it includes support for the backported WSL2 bits.
Step Two: Install Run the installer, and make sure to tick the box for installing the WSL2 engine.
Step Three: Configure Docker Desktop Launch Docker Desktop from the Start menu, and you should be presented with this friendly prompt: Hit that big friendly "gimme WSL2" button. Then open the Docker Settings from the system tray, and make sure that General > Use the WSL 2 based engine is enabled. Now navigate to Resources > WSL Integration, and confirm that Enable integration with my default WSL distro is enabled as well. Smash the "Apply & Restart" button if you've made any changes.
Test it! Fire up a WSL session and confirm that everything is working with docker run hello-world: It's beautiful!
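If you want a bit more assurance that you're really talking to the WSL2 backend, a couple of extra commands from inside the WSL session can confirm it - the Server section of docker version should report the Docker Desktop engine:
docker run --rm hello-world   # pulls and runs the test image
docker version                # client and server details; the server side should be the Docker Desktop (WSL2) engine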
Logging in to Multiple vCenter Servers at Once with PowerCLI (https://runtimeterror.dev/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/)
I manage a large VMware environment spanning several individual vCenters, and I often need to run PowerCLI↗ queries across the entire environment. I waste valuable seconds running Connect-ViServer and logging in for each and every vCenter I need to talk to. Wouldn't it be great if I could just log into all of them at once?
I can, and here's how I do it.
The Script The following Powershell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.
# torchlight! {"lineNumbers": true} # PowerCLI_Custom_Functions.ps1 # Usage: # 0) Edit $vCenterList to reference the vCenters in your environment. # 1) Call 'Update-Credentials' to create/update a ViCredentialStoreItem to securely store your username and password. # 2) Call 'Connect-vCenters' to simultaneously open connections to all the vCenters in your environment. # 3) Do PowerCLI things. # 4) Call 'Disconnect-vCenters' to cleanly close all ViServer connections because housekeeping. Import-Module VMware.PowerCLI $vCenterList = @("vcenter1", "vcenter2", "vcenter3", "vcenter4", "vcenter5") function Update-Credentials { $newCredential = Get-Credential ForEach ($vCenter in $vCenterList) { New-ViCredentialStoreItem -Host $vCenter -User $newCredential.UserName -Password $newCredential.GetNetworkCredential().password } } function Connect-vCenters { ForEach ($vCenter in $vCenterList) { Connect-ViServer -Server $vCenter } } function Disconnect-vCenters { Disconnect-ViServer -Server * -Force -Confirm:$false } The Setup Edit whatever shortcut you use for launching PowerCLI (I use a tab in Windows Terminal↗ - I'll do another post on that setup later) to reference the custom init script. Here's the command line I use:
powershell.exe -NoExit -Command ". C:\Scripts\PowerCLI_Custom_Functions.ps1" The Usage Now just use that shortcut to open up PowerCLI when you wish to do things. The custom functions will be loaded and waiting for you.
Start by running Update-Credentials. It will prompt you for the username+password needed to log into each vCenter listed in $vCenterList. These can be the same or different accounts, but you will need to enter the credentials for each vCenter since they get stored in a separate ViCredentialStoreItem. You'll also run this function again if you need to change the password(s) in the future. Log in to all the things by running Connect-vCenters. Do your work. When you're finished, be sure to call Disconnect-vCenters so you don't leave sessions open in the background.
3D Modeling and Printing on Chrome OS (https://runtimeterror.dev/3d-modeling-and-printing-on-chrome-os/)
I've got an Ender 3 Pro 3D printer, a Raspberry Pi 4, and a Pixel Slate. I can't interface directly with the printer over USB from the Slate (plus having to be physically connected to things is like so lame) so I installed Octoprint on the Raspberry Pi↗ and connected that to the printer's USB interface. This gave me a pretty web interface for controlling the printer - but it's only accessible over the local network. I also installed The Spaghetti Detective↗ to allow secure remote control of the printer, with the added bonus of using AI magic and a cheap camera to detect and abort failing prints.
That's a pretty sweet setup, but I still needed a way to convert STL 3D models into GCODE files which the printer can actually understand. And what if I want to create my own designs?
Enter "Crostini," Chrome OS's Linux (Beta) feature↗. It consists of a hardened Linux VM named termina which runs (by default) a Debian Buster LXD container named penguin (though you can spin up just about any container for which you can find an image↗) and some fancy plumbing to let Chrome OS and Linux interact in specific clearly-defined ways. It's a brilliant balance between offering the flexibility of Linux while preserving Chrome OS's industry-leading security posture.
There are plenty of great guides (like this one↗) on how to get started with Linux on Chrome OS so I won't rehash those steps here.
One additional step you will probably want to take is to make sure that your Chromebook is configured to enable hyperthreading, as it may have hyperthreading disabled by default↗. Just plug chrome://flags/#scheduler-configuration into Chrome's address bar, set it to Enables Hyper-Threading on relevant CPUs, and then click the button to restart your Chromebook. You'll thank me later. The Software I settled on using FreeCAD↗ for parametric modeling and Ultimaker Cura↗ for my GCODE slicer, but unfortunately getting them working cleanly wasn't entirely straightforward.
FreeCAD Installing FreeCAD is as easy as:
sudo apt update # [tl! .cmd:2] sudo apt install freecad But launching /usr/bin/freecad caused me some weird graphical defects which rendered the application unusable. I found that I needed to pass the LIBGL_DRI3_DISABLE=1 environment variable to eliminate these glitches:
env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad & # [tl! .cmd] To avoid having to type that every time I wished to launch the app, I inserted this line at the bottom of my ~/.bashrc file:
alias freecad="env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &" To be able to start FreeCAD from the Chrome OS launcher with that environment variable intact, edit it into the Exec line of the /usr/share/applications/freecad.desktop file:
sudo vi /usr/share/applications/freecad.desktop # [tl! .cmd] [Desktop Entry] Version=1.0 Name=FreeCAD Name[de]=FreeCAD Comment=Feature based Parametric Modeler Comment[de]=Feature-basierter parametrischer Modellierer GenericName=CAD Application GenericName[de]=CAD-Anwendung Exec=env LIBGL_DRI3_DISABLE=1 /usr/bin/freecad %F # [tl! focus] Path=/usr/lib/freecad Terminal=false Type=Application Icon=freecad Categories=Graphics;Science;Engineering StartupNotify=true GenericName[de_DE]=Feature-basierter parametrischer Modellierer Comment[de_DE]=Feature-basierter parametrischer Modellierer MimeType=application/x-extension-fcstd That's it! Get on with your 3D-modeling bad self. Now that you've got a model, be sure to export it as an STL mesh↗ so you can import it into your slicer.
Ultimaker Cura Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1↗. You can do this in Chrome and then use the built-in File app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in ~/Applications/.
To be able to actually execute the AppImage you'll need to adjust the permissions with 'chmod +x':
chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage # [tl! .cmd] You can then start up the app by calling the file directly:
~/Applications/Ultimaker_Cura-4.7.1.AppImage & # [tl! .cmd] AppImages don't automatically appear in the Chrome OS launcher so you'll need to create a .desktop file for the app. You can do this manually if you want (there's a rough sketch of a hand-made entry at the end of this post), but I found it a lot easier to leverage menulibre:
sudo apt update && sudo apt install menulibre # [tl! .cmd:2] menulibre Just plug in the relevant details (you can grab the appropriate icon here↗), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher. From there, just import the STL mesh, configure the appropriate settings, slice, and save the resulting GCODE. You can then just upload the GCODE straight to The Spaghetti Detective and kick off the print.
Nice!
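(And for the record, here's roughly what a hand-written .desktop entry for the Cura AppImage could look like if you'd rather skip menulibre - the username in the Exec path and the icon name are placeholders to adjust for your setup:)
cat <<'EOF' > ~/.local/share/applications/cura.desktop
[Desktop Entry]
Type=Application
Name=Ultimaker Cura
Exec=/home/yourusername/Applications/Ultimaker_Cura-4.7.1.AppImage %F
Icon=cura
Terminal=false
Categories=Graphics;
EOF
Crostini should notice the new file and surface Cura in the launcher shortly afterward.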
Finding the most popular IPs in a log file (https://runtimeterror.dev/finding-the-most-popular-ips-in-a-log-file/)
I found myself with a sudden need for parsing a Linux server's logs to figure out which host(s) had been slamming it with an unexpected burst of traffic. Sure, there are proper log analysis tools out there which would undoubtedly make short work of this but none of those were installed on this hardened system. So this is what I came up with.
Find IP-ish strings This will get you all occurrences of things which look vaguely like IPv4 addresses:
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT # [tl! .cmd] (It's not a perfect IP address regex since it would match things like 987.654.321.555 but it's close enough for my needs.)
Filter out localhost The log likely includes a LOT of traffic to/from 127.0.0.1 so let's toss out localhost by piping through grep -v "127.0.0.1" (-v will do an inverse match - only return results which don't match the given expression):
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" # [tl! .cmd] Count up the duplicates Now we need to know how many times each IP shows up in the log. We can do that by sorting the output (so that identical addresses land next to each other) and then passing it through uniq -c (uniq only collapses adjacent duplicate lines, and the -c flag returns a count of how many times each result appears):
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c # [tl! .cmd] Sort the results We can use sort again to order the counts. -n tells it to sort based on numeric rather than character values, and -r reverses the list so that the larger numbers appear at the top:
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r # [tl! .cmd] Top 5 And, finally, let's use head -n 5 to only get the first five results:
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5 # [tl! .cmd] Bonus round! You know how old log files get rotated and compressed into files like logname.1.gz? I very recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract the files. I'd been doing things the hard way for years - no longer, now that I know about zcat, zdiff, zgrep, and zless!
So let's use a for loop to iterate through 20 of those compressed logs, and use date -r [filename] to get the timestamp for each log as we go:
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \ # [tl! .cmd] ACCESS_LOG.$i.gz | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5; done Nice!
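As a side note: if your access log uses the common format where the client IP is the first whitespace-delimited field, awk can pull that field out directly instead of regex-matching the whole line - same idea, just a different extraction step:
zcat ACCESS_LOG.1.gz | awk '{print $1}' | grep -v '^127\.0\.0\.1$' | sort | uniq -c | sort -n -r | head -n 5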
BitWarden password manager self-hosted on free Google Cloud instance (https://runtimeterror.dev/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/)
A friend mentioned the BitWarden↗ password manager to me yesterday and I had to confess that I'd never heard of it. I started researching it and was impressed by what I found: it's free, open-source↗, feature-packed, fully cross-platform (with Windows/Linux/MacOS desktop clients, Android/iOS mobile apps, and browser extensions for Chrome/Firefox/Opera/Safari/Edge/etc), and even offers a self-hosted option.
I wanted to try out the self-hosted setup, and I discovered that the official distribution↗ works beautifully on an n1-standard-1 1-vCPU Google Compute Engine instance - but that would cost me an estimated $25/mo to run after my free Google Cloud Platform trial runs out. And I can't really scale that instance down further because the embedded database won't start with less than 2GB of RAM.
I then came across this comment↗ on Reddit which discussed in somewhat-vague terms the steps required to get BitWarden to run on the free↗ e2-micro instance, and also introduced me to the community-built vaultwarden↗ project which is specifically designed to run a BW-compatible server on resource-constrained hardware. So here are the steps I wound up taking to get this up and running.
bitwarden_rs -> vaultwarden
When I originally wrote this post back in September 2018, the containerized BitWarden solution was called bitwarden_rs. The project has since been renamed↗ to vaultwarden, and I've since moved to the hosted version of BitWarden. I have attempted to update this article to account for the change but have not personally tested this lately. Good luck, dear reader!
Spin up a VM Easier said than done, but head over to https://console.cloud.google.com/↗ and fumble through:
Creating a new project (or just add an instance to an existing one). Creating a new Compute Engine instance, selecting e2-micro for the Machine Type and ticking the Allow HTTPS traffic box. (Optional) Editing the instance to add an ssh-key for easier remote access. Configure Dynamic DNS Because we're cheap and don't want to pay for a static IP.
Log in to the Google Domain admin portal↗ and create a new Dynamic DNS record↗. This will provide a username and password specific for that record. Log in to the GCE instance and run sudo apt-get update followed by sudo apt-get install ddclient. Part of the install process prompts you to configure things... just accept the defaults and move on. Edit the ddclient config file to look like this, substituting the username, password, and FQDN from Google Domains: sudo vim /etc/ddclient.conf # [tl! .cmd] # torchlight! {"lineNumbers": true} # Configuration file for ddclient generated by debconf # # /etc/ddclient.conf protocol=googledomains, ssl=yes, syslog=yes, use=web, server=domains.google.com, login='[USERNAME]', # [tl! highlight:3] password='[PASSWORD]', [FQDN] sudo vi /etc/default/ddclient and make sure that run_daemon="true": # torchlight! {"lineNumbers": true} # Configuration for ddclient scripts # generated from debconf on Sat Sep 8 21:58:02 UTC 2018 # # /etc/default/ddclient # Set to "true" if ddclient should be run every time DHCP client ('dhclient' # from package isc-dhcp-client) updates the systems IP address. run_dhclient="false" # Set to "true" if ddclient should be run every time a new ppp connection is # established. This might be useful, if you are using dial-on-demand. run_ipup="false" # Set to "true" if ddclient should run in daemon mode [tl! focus:3] # If this is changed to true, run_ipup and run_dhclient must be set to false. run_daemon="true" # Set the time interval between the updates of the dynamic DNS name in seconds. # This option only takes effect if the ddclient runs in daemon mode. daemon_interval="300" Restart the ddclient service - twice for good measure (daemon mode only gets activated on the second go because reasons): sudo systemctl restart ddclient # [tl! .cmd:2] sudo systemctl restart ddclient After a few moments, refresh the Google Domains page to verify that your instance's external IP address is showing up on the new DDNS record. Install Docker Steps taken from here↗.
Update apt package index: sudo apt-get update # [tl! .cmd] Install package management prereqs: sudo apt-get install \ # [tl! .cmd] apt-transport-https \ ca-certificates \ curl \ gnupg2 \ software-properties-common Add Docker GPG key: curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - # [tl! .cmd] Add the Docker repo: sudo add-apt-repository \ # [tl! .cmd] "deb [arch=amd64] https://download.docker.com/linux/debian \ $(lsb_release -cs) \ stable" Update apt index again: sudo apt-get update # [tl! .cmd] Install Docker: sudo apt-get install docker-ce # [tl! .cmd]
Install Certbot: sudo apt-get install certbot # [tl! .cmd] Generate certificate: sudo certbot certonly --standalone -d ${FQDN} # [tl! .cmd] Create a directory to store the new certificates and copy them there: sudo mkdir -p /ssl/keys/ # [tl! .cmd:3] sudo cp -p /etc/letsencrypt/live/${FQDN}/fullchain.pem /ssl/keys/ sudo cp -p /etc/letsencrypt/live/${FQDN}/privkey.pem /ssl/keys/ Set up vaultwarden Using the container image available here↗.
Let's just get it up and running first: sudo docker run -d --name vaultwarden \ # [tl! .cmd] -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \ -e ROCKET_PORT='8000' \ -v /ssl/keys/:/ssl/ \ -v /bw-data/:/data/ \ -v /icon_cache/ \ -p 0.0.0.0:443:8000 \ vaultwarden/server:latest At this point you should be able to point your web browser at https://[FQDN] and see the BitWarden login screen. Click on the Create button and set up a new account. Log in, look around, add some passwords, etc. Everything should basically work just fine. Unless you want to host passwords for all of the Internet you'll probably want to disable signups at some point by adding the env option SIGNUPS_ALLOWED=false. And you'll need to set DOMAIN=https://[FQDN] if you want to use U2F authentication: sudo docker stop vaultwarden # [tl! .cmd:2] sudo docker rm vaultwarden sudo docker run -d --name vaultwarden \ -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \ -e ROCKET_PORT='8000' \ -e SIGNUPS_ALLOWED=false \ -e DOMAIN=https://[FQDN] \ -v /ssl/keys/:/ssl/ \ -v /bw-data/:/data/ \ -v /icon_cache/ \ -p 0.0.0.0:443:8000 \ vaultwarden/server:latest Install vaultwarden as a service So we don't have to keep manually firing this thing off.
Create a script at /usr/local/bin/start-vaultwarden.sh to stop, remove, update, and (re)start the vaultwarden container: sudo vim /usr/local/bin/start-vaultwarden.sh # [tl! .cmd] # torchlight! {"lineNumbers": true} #!/bin/bash docker stop vaultwarden docker rm vaultwarden docker pull vaultwarden/server docker run -d --name vaultwarden \ -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \ -e ROCKET_PORT='8000' \ -e SIGNUPS_ALLOWED=false \ -e DOMAIN=https://${FQDN} \ -v /ssl/keys/:/ssl/ \ -v /bw-data/:/data/ \ -v /icon_cache/ \ -p 0.0.0.0:443:8000 \ vaultwarden/server:latest sudo chmod 744 /usr/local/bin/start-vaultwarden.sh # [tl! .cmd] And add it as a systemd service (note that ExecStart points at the same /usr/local/bin/start-vaultwarden.sh script created above): sudo vim /etc/systemd/system/vaultwarden.service # [tl! .cmd] [Unit] Description=BitWarden container Requires=docker.service After=docker.service [Service] Restart=always ExecStart=/usr/local/bin/start-vaultwarden.sh # [tl! highlight] ExecStop=/usr/bin/docker stop vaultwarden [Install] WantedBy=default.target sudo chmod 644 /etc/systemd/system/vaultwarden.service # [tl! .cmd] Try it out: sudo systemctl start vaultwarden # [tl! .cmd] sudo systemctl status vaultwarden # [tl! .cmd focus:start] ● bitwarden.service - BitWarden container # [tl! .nocopy:start] Loaded: loaded (/etc/systemd/system/vaultwarden.service; enabled; vendor preset: enabled) Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS) # [tl! focus:end] Main PID: 13104 (code=exited, status=0/SUCCESS); Control PID: 13229 (docker) Tasks: 5 (limit: 4915) Memory: 9.7M CPU: 375ms CGroup: /system.slice/vaultwarden.service └─control └─13229 /usr/bin/docker stop vaultwarden Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645 # [tl! .nocopy:end] Conclusion If all went according to plan, you've now got a highly-secure, open-source, full-featured, cross-platform password manager running on an Always Free Google Compute Engine instance, resolved by Google Domains dynamic DNS. Very slick!
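One loose end these steps don't cover: the Let's Encrypt certificate is only good for 90 days, so the renewed files eventually need to land back in /ssl/keys/. Something along these lines should work as a starting point, though treat it as an untested sketch ($RENEWED_LINEAGE is an environment variable certbot exposes to deploy hooks, and the service name matches the unit created above):
sudo certbot renew --deploy-hook 'cp -p "$RENEWED_LINEAGE"/fullchain.pem "$RENEWED_LINEAGE"/privkey.pem /ssl/keys/ && systemctl restart vaultwarden'
Debian's certbot package also installs a systemd timer that runs certbot renew on a schedule, so with the deploy hook in place the renewals should be hands-off.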