<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Matt Burman]]></title><description><![CDATA[Software Consultant]]></description><link>https://mattburman.com/</link><image><url>https://mattburman.com/favicon.png</url><title>Matt Burman</title><link>https://mattburman.com/</link></image><generator>Ghost 4.6</generator><lastBuildDate>Sun, 19 Apr 2026 08:37:33 GMT</lastBuildDate><atom:link href="https://mattburman.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A tale of Distributed Context]]></title><description><![CDATA[Observability. It's a Buzzword, but critical for understanding failures across a large organisation of distributed services. Context is critical for debugging across team and service boundaries. This is the story of how we came to implement Distributed Tracing to improve our Observability.]]></description><link>https://mattburman.com/a-tale-of-distributed-context/</link><guid isPermaLink="false">60c0bb1de3598d0001474f2f</guid><category><![CDATA[observability]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[tracing]]></category><category><![CDATA[opentelemetry]]></category><category><![CDATA[squad]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Tue, 15 Jun 2021 06:54:06 GMT</pubDate><media:content url="https://mattburman.com/content/images/2021/06/Screen-Shot-2021-06-15-at-07.53.07.png" medium="image"/><content:encoded><![CDATA[<img src="https://mattburman.com/content/images/2021/06/Screen-Shot-2021-06-15-at-07.53.07.png" alt="A tale of Distributed Context"><p>I have been excited about Distributed Tracing for a while. It&apos;s been on my radar for a couple of years. With OpenTelemetry, maturity is growing every day. 
I was waiting for an opportunity to implement tracing around a busy roadmap as a software engineer, and my chance finally arrived last year. Here is that story!</p><h1 id="what-is-opentelemetry">What is OpenTelemetry?</h1><p>OpenTelemetry is a framework for &quot;Observability&quot;. Now, Observability is a super loaded term. Essentially, it&apos;s about gathering detailed and structured context about your systems for making insights available when you need them most.</p><p>The OpenTelemetry project aims to unify three types of telemetry data - Traces, Metrics, and Logs. It provides standards for instrumenting, generating, exporting and ingesting telemetry data. In the long term, this will reduce fragmentation in the ecosystem. I am particularly excited about the reduction in vendor lock-in. Previously, implementations of tracing have ended up highly coupled to vendors. OpenTelemetry will enable observability companies to focus on insights and business value, not re-inventing data formats, instrumentation, and ingestion. I have found that the tracing capabilities are better than metrics and logging for now, but this should improve over time.</p><h1 id="storytime-how-have-i-used-it">Storytime! How have I used it?</h1><p>I used OpenTelemetry to instrument a system experiencing some unknown issues across two tribes, three squads, and (at least) four services. Firstly, let me set the scene with a bit of background.</p><p>Let&apos;s first name the squads and their relevant services:</p><ul><li>Product - a Frontend in my Tribe.</li><li>My Squad - Cross-product Backend APIs for my Tribe.</li><li>Account - Cross-product OAuth and Legacy Account APIs.</li></ul><p>My squad enables product squads in Gaming to focus on what they do best - building a great product. Ditto, for the Account squad.</p><p>Our backend service facilitates Legacy-&gt;OAuth session transfer and performs OAuth Tokenset cookie maintenance on all requests through HTTP middleware. 
Our Backend squad was essentially abstracting the Account API calls and maintenance of OAuth sessions to simplify complex Account integrations across multiple Product frontends.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/Untitled-Diagram.svg" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="285" height="367"><figcaption>A very high-level overview of the squads and services involved. Arrows are the direction of data flow rather than network ingress.</figcaption></figure><h2 id="what-was-the-problem">What was the problem?</h2><p>On login, the Frontend creates two sessions - a &quot;Legacy&quot; and an &quot;OAuth&quot; session. For a small number of customers, the Product squad found that the OAuth TokenSet was not present in cookies, leading to some requests failing with unauthorized errors. They expected a valid OAuth TokenSet to be present in cookies from their previous call during login for Legacy-&gt;OAuth session transfer. Despite the session transfer failing, the Frontend &quot;login flow&quot; as a whole was still successful because the Legacy authorization was. The failures meant that some newer features depending on the additional OAuth session were failing. Naturally, the Product squad asked us to help.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image-9.png" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="946" height="506" srcset="https://mattburman.com/content/images/size/w600/2021/06/image-9.png 600w, https://mattburman.com/content/images/2021/06/image-9.png 946w" sizes="(min-width: 720px) 720px"><figcaption>The systems involved, generated by NewRelic once Distributed Tracing was implemented. Frontend to Backend to Account services.</figcaption></figure><h2 id="investigation-time">Investigation time</h2><p>As the Product squad&apos;s integration point, we investigated. 
We could see from our logs that many of the failures were to do with accounts lacking authorization. We told the Product squad that they were unauthorized requests. &quot;How is that possible?&quot; they asked. Neither squad understood. They insisted it must be for another reason because all API calls were from accounts with a valid Legacy login session.</p><p>We took their word for it and investigated any remaining errors with the session transfer. I improved our logging to include the full context of errors in a single structured log rather than multiple log lines per request. I then set up a Kibana dashboard to sum API call failures by the error message (we use the Elastic stack for log shipping). Using the summary, we could show that, yes, there were still network issues. However, we still insisted to the Product squad that the primary reason for these requests failing was that the customer was unauthorized. &quot;How is that possible?&quot; they asked. &quot;The customer is already logged in!&quot;.</p><p>We took their word for it and began to investigate the network issues. We noticed spikes of timeouts on Kibana graphs. We weren&apos;t exactly sure where this was timing out. It could have been in a few places. Our backend lives in AWS ECS. It makes requests to the Account APIs which are not in AWS - instead hosted in an on-premises data centre (for compliance). These requests go over the internet (fronted by Akamai) or over a Transit VPC to get to the on-premises data centre. From there, it&apos;s through a web-tier ZXTM load balancer, an app-tier F5 load balancer, then onto the applications. The applications themselves are either on a VMware stack or an on-premises Kubernetes cluster requiring routing through the overlay network and the Istio envoys. Quite a few places for failure - and that&apos;s just describing the ingress to the app tier!</p><p>We asked the Account squad if they could see these spikes on their end. They could! 
We were making some progress - at least our request was getting that far. Excitedly, they worked on fixing the issues. It was DNS. It&apos;s always DNS! There were sporadic issues with CoreDNS availability in the Kubernetes cluster. Their backing services were timing out on DNS lookups. With the addition of some DNS lookup retries, the spikes subsided.</p><p>So we had made some excellent progress! However, the Product squad was still reporting issues. The errors were an even higher percentage of authorization failures now, which backed up the theory of unauthorized users. The Product squad could see that the DNS retry fix had reduced the rate of failures but certainly not eliminated them. We asked the Product squad how some users might not be authorized. &quot;How is that possible?&quot; they asked. &quot;The customers are logged in!&quot;, &quot;They have a legacy session!&quot;.</p><p>We dug a bit deeper. There were still some network issues, albeit at a lower rate. We decided that Distributed Tracing would be insightful. Passing around timestamps so we could each check our respective logging and monitoring tooling for context was getting tiring. A shared view of our requests would be better. We now had a good justification for putting in some time to improve our observability.</p><h1 id="implementing-distributed-tracing">Implementing Distributed Tracing</h1><p>It turns out NewRelic is in use around the business, including the three other services. Very convenient! NewRelic has a suite of products for monitoring applications which we use. Firstly, we use the frontend monitoring capabilities. Additionally, some services use their APM agents for backend monitoring. However, I prefer not to couple monitoring to proprietary systems. Fortunately, the business as a whole does not rely on NewRelic for monitoring, with Prometheus and Grafana used for most services. Tracing, however, was lacking. 
The Account services had NewRelic APM agents with Distributed Tracing enabled, which works for their services. However, our Go applications did not have any Tracing capabilities. There did not seem to be consistent adoption of open standards across the business either. If only all services were inside one large service mesh with some automatic instrumentation!</p><p>I discovered that NewRelic APM agents support the <a href="https://newrelic.com/blog/nerdlog/w3c-trace-context-distributed-tracing-standard">W3C Trace Context specification</a>. This standard defines how Trace Context data is persisted throughout the request-response cycle as it propagates from service to service. W3C Trace Context is also the default Propagator for OpenTelemetry. This default, combined with NewRelic&apos;s support for it, enables their APM agents to propagate headers to and from any other OpenTelemetry-instrumented service. I was super excited about this because we could now ensure compatibility with other systems.</p><p>Instrumentation was easy too! There are instrumentation packages for the HTTP server, request routers, and HTTP clients. For the HTTP clients, httptrace generates spans about DNS lookups, connection establishment, TLS handshakes, and receiving the response. That detail is invaluable for diagnosing network issues. I also found it easy to add custom attribute data to the spans generated by the automatic instrumentation. That&apos;s excellent for adding details about any custom business logic that cannot be annotated automatically.</p><p>We also integrated the <a href="https://github.com/newrelic/opentelemetry-exporter-go">NewRelic OpenTelemetry Exporter</a> to ship off the trace data using their ingestion API. 
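</p><p>As an aside, the W3C Trace Context propagation mentioned above ultimately boils down to a couple of HTTP headers passed between services. A minimal sketch of the <code>traceparent</code> format (the trace and span ids below are made up):</p><figure class="kg-card kg-code-card"><pre><code class="language-bash"># traceparent is &lt;version&gt;-&lt;trace-id&gt;-&lt;parent-span-id&gt;-&lt;trace-flags&gt;
tp=00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
# sanity check the shape: 2, 32, 16 and 2 lowercase hex digits
echo $tp | grep -Eq '^[0-9a-f]{2}-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$' &amp;&amp; echo valid</code></pre><figcaption>A hypothetical W3C traceparent header value, per the Trace Context specification.</figcaption></figure><p>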
Note for future integrations: you can now use an <a href="https://docs.newrelic.com/whats-new/2021/04/native-support-opentelemetry/">OTLP exporter</a> enabling NewRelic ingestion in a standards-compliant way too.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image-4.png" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="360" height="346"><figcaption>Our initial integration with basic instrumentation of HTTP requests. httptrace and cross-service spans came later!</figcaption></figure><p>A further promising benefit of OpenTelemetry is that we are less locked in to any particular vendor such as NewRelic. If we decide to use another trace aggregation service, we can - swapping out the exporter is all that is needed. That is even simpler with OTLP, when it&apos;s just swapping out the endpoint. Multiple exporters could even run simultaneously! That could be Honeycomb, Lightstep, Elastic APM, etc. Providers can now compete on their ingestion and aggregation capabilities instead of their ease of integration. Secondly, as the OpenTelemetry ecosystem matures, we may get other benefits &quot;for free&quot;. One example might be replacing our custom Prometheus instrumentation with OpenTelemetry metrics exposed on the metrics endpoint or ingested by Prometheus via the push gateway.</p><p>So now we had Tracing data for our service in NewRelic - awesome! We asked the Product squad to enable Distributed Tracing on their browser agents too. Once they did this, we had distributed tracing with data from four services across squad, service and tribe boundaries. 
It was satisfying as this was a first for the business.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image-10.png" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="1024" height="615" srcset="https://mattburman.com/content/images/size/w600/2021/06/image-10.png 600w, https://mattburman.com/content/images/size/w1000/2021/06/image-10.png 1000w, https://mattburman.com/content/images/2021/06/image-10.png 1024w" sizes="(min-width: 720px) 720px"><figcaption>A distributed trace with detailed spans from four services, all powered by different integrations. The span hierarchy isn&apos;t perfect but NewRelic seems to improve the compatibility over time.</figcaption></figure><p>We also added <a href="https://blog.golang.org/http-tracing">httptrace instrumentation</a> which helped us to find further details about some outlier network request failures. We tweaked our Go HTTP client configuration to increase timeouts, which reduced the network failures. Now, the network failures were negligible compared with the unauthorized users. We told the Product squad that the users were not authorized. &quot;That can&apos;t be!&quot; they said. &quot;They&apos;re logged in to Legacy!&quot;.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image-5.png" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="1021" height="205" srcset="https://mattburman.com/content/images/size/w600/2021/06/image-5.png 600w, https://mattburman.com/content/images/size/w1000/2021/06/image-5.png 1000w, https://mattburman.com/content/images/2021/06/image-5.png 1021w"><figcaption>A heatmap of trace durations. Clearly a few timeouts at 6s.</figcaption></figure><p>This time, we had essentially resolved the network issues. We recorded the error message in a custom attribute on our spans in the distributed trace. 
The Product squad could now see for themselves, with the full context in their view of NewRelic, that unauthorized users were the cause of most failures.</p><p>The Product squad investigated with the Account squad and discovered an issue with the login flow. It turned out that suspended customers were getting a Legacy session but not authorization for the minimum scopes required for issuing an OAuth TokenSet during the session transfer. It made absolute sense that they could not perform these actions. An additional user journey for suspended customers resolved the remaining errors.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image-7.png" class="kg-image" alt="A tale of Distributed Context" loading="lazy" width="864" height="355" srcset="https://mattburman.com/content/images/size/w600/2021/06/image-7.png 600w, https://mattburman.com/content/images/2021/06/image-7.png 864w" sizes="(min-width: 720px) 720px"><figcaption>Error levels showing the fix was released!</figcaption></figure><h1 id="summary">Summary</h1><p>Did implementing Tracing fix our issue? Not exactly. The problem was more in our understanding of the integration prerequisites than the operational interactions between these systems. The main issue was still visible in the logs. However, we did find issues that contributed. That said, Tracing gave our squads a shared understanding of the nature of our complex distributed requests and how they can fail. It helped to discover, quantify, and put in context the multiple issues we were experiencing. We eventually narrowed down the root issue, improving our reliability in the process. 
In the future, with tracing in place, we will be able to drill down to these answers much faster.</p>]]></content:encoded></item><item><title><![CDATA[How I scored 97% in the CKAD exam - Certified Kubernetes Application Developer]]></title><description><![CDATA[I have been using Kubernetes since 2019 for personal use on GKE and using clusters at work. That was NOT enough to pass! I assumed that it would not take much effort to pass this exam. I was wrong. Whilst I had most of the knowledge I needed, I did not realise just how fast you need to be.]]></description><link>https://mattburman.com/how-i-passed-the-ckad-exam/</link><guid isPermaLink="false">60be23298b980a000142b73e</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[skills]]></category><category><![CDATA[containers]]></category><category><![CDATA[ckad]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Fri, 11 Jun 2021 14:45:09 GMT</pubDate><media:content url="https://mattburman.com/content/images/2021/06/Screen-Shot-2021-06-11-at-14.53.05.png" medium="image"/><content:encoded><![CDATA[<img src="https://mattburman.com/content/images/2021/06/Screen-Shot-2021-06-11-at-14.53.05.png" alt="How I scored 97% in the CKAD exam - Certified Kubernetes Application Developer"><p>Disclaimer: I passed my exam in June 2021 on the Kubernetes 1.20 exam. By the time you read this, some of the commands or references below could be out of date. You should tailor this advice to what works for you. That said, much of this advice should be relatively timeless.</p><p>There are many of these types of posts on the internet. Everyone has different experiences so I figured my different perspective might be useful!</p><p>I have been using Kubernetes since 2019 for personal use on GKE and using clusters at work. That was not enough to pass! I assumed that it would not take much effort to pass this exam. I was wrong. 
Whilst I had most of the knowledge I needed, I did not realise just how fast you need to be. This is compounded by the fact you need to do everything purely through the CLI. If you&apos;re used to any krew or editor plugins, unlearn that workflow. Get used to using barebones kubectl and a CLI editor. You have two hours to complete 19 practical questions.</p><p>It&apos;s also worth mentioning - I didn&apos;t pass this exam the first time! I&apos;m not exactly sure why - I thought I did OK - but I reckon the main reason was that I wasn&apos;t switching to the default namespace when a question didn&apos;t mention one. I also had an issue with Docker Hub rate limiting that I didn&apos;t experience the second time. Either way, I passed with 97% on the second attempt a week later. I did do a few things differently and the following tips include those things!</p><h1 id="practice">Practice</h1><p>This article isn&apos;t really about learning the concepts of Kubernetes. It&apos;s more focused on how to pass the exam. I mainly learnt Kubernetes through practice at work and for personal projects.</p><p>That said, I used <a href="https://www.udemy.com/course/certified-kubernetes-application-developer/">Mumshad&apos;s course</a>. The most useful part of this for me was the labs. The great thing about these labs is that they are similar to the experience in the exam. You get a terminal to an Ubuntu node with kubectl configured. You have to configure your setup in a similar way to how you would have to in the exam. I highly recommend pretending you ARE in the exam every single time you do a lab. You don&apos;t want to be wasting time setting things up.</p><ul><li>Do Mumshad&apos;s labs</li><li>Do these <a href="https://github.com/dgkanatsios/CKAD-exercises">CKAD exercises on GitHub</a> at least once if not more.</li><li>Do the <a href="https://killer.sh">killer.sh</a> practice exam. Do this at the end once you are confident you are fast. 
It&apos;s not free, but it&apos;s the closest thing to the real exam you&apos;ll get.</li></ul><p>Do these. Do them again. To get fast, you need to practice. It gets tedious, but practice will make you faster.</p><h1 id="configuration">Configuration</h1><p>Set up your configuration every time you do practice labs or do any learning (apart from the OS setup, which you only need to do once). You can use this configuration section as a checklist. It&apos;s in a rough order too. You should aim to get all of this configured in 2-3 minutes. You don&apos;t want to be wasting precious time at the start, so practice makes perfect.</p><p>This section is my opinion, so feel free to modify as you see fit.</p><h2 id="os">OS</h2><p>Set up a CKAD Chromium profile that is not logged into anything. Do not use this for anything other than accessing <a href="http://kubernetes.io">kubernetes.io</a> and the exam.</p><p>Set up bookmarks. I used these ones on GitHub: <a href="https://github.com/reetasingh/CKAD-Bookmarks">reetasingh/CKAD-Bookmarks</a>. Whatever bookmarks you use, make sure they are up to date and that you know how to use them. 
You can import a bookmarks HTML file at chrome://settings/importData.</p><p>Set up custom search engines to search the docs at kubernetes.io/search/?q=%s and your bookmarks at chrome://bookmarks?q=%s</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/image.png" class="kg-image" alt="How I scored 97% in the CKAD exam - Certified Kubernetes Application Developer" loading="lazy" width="1300" height="560" srcset="https://mattburman.com/content/images/size/w600/2021/06/image.png 600w, https://mattburman.com/content/images/size/w1000/2021/06/image.png 1000w, https://mattburman.com/content/images/2021/06/image.png 1300w" sizes="(min-width: 720px) 720px"><figcaption>The search engine setup in Chromium.</figcaption></figure><p>Remove other search engines so you don&apos;t accidentally search for something else.</p><p>I like to set my key repeat to maximum, which helps in Vim and when navigating the terminal.</p><p>In the real exam, close all programs other than Chromium. Reduce CPU, memory and network usage by also killing any hungry background processes (e.g. backup tools like Dropbox).</p><p>Set up two windows. It says two tabs in the guidance but I used two windows just fine. I use the left two thirds of the screen for the exam window and the right third for <a href="http://kubernetes.io">kubernetes.io</a>.</p><p>Hide any unnecessary OS UIs such as taskbars etc.</p><p>I also highly recommend using a large, high-resolution external monitor for the exam. Laptops must be closed during the exam when using an external display. With my first attempt, I had an issue with the proctor not liking the laptop being on my desk for some reason but they accepted it in the end and it was not an issue the second time. Must have been a misunderstanding.</p><p>My setup looked something like this. 
We&apos;ll get onto the terminal setup in the configuration section below!</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://mattburman.com/content/images/2021/06/Screen-Shot-2021-06-06-at-15.20.39.png" class="kg-image" alt="How I scored 97% in the CKAD exam - Certified Kubernetes Application Developer" loading="lazy" width="2000" height="1125" srcset="https://mattburman.com/content/images/size/w600/2021/06/Screen-Shot-2021-06-06-at-15.20.39.png 600w, https://mattburman.com/content/images/size/w1000/2021/06/Screen-Shot-2021-06-06-at-15.20.39.png 1000w, https://mattburman.com/content/images/size/w1600/2021/06/Screen-Shot-2021-06-06-at-15.20.39.png 1600w, https://mattburman.com/content/images/size/w2400/2021/06/Screen-Shot-2021-06-06-at-15.20.39.png 2400w"><figcaption>An example of how to set up your display for the CKAD exam</figcaption></figure><p>With this setup, the questions will be along the left edge of the monitor. Your working terminal will be roughly in the middle. Your notepad will be in the top right corner of the terminal (when open). There are commands for the exam in the top of the window.</p><p>Note: the exam window in the screenshot is a mock (<a href="https://killer.sh">killer.sh</a>) rather than the real exam but the real one is similar. I highly recommend buying the mock for practice. The mock exam window is using a different chromium profile to keep the blue CKAD profile pure from external links.</p><h2 id="become-root">Become root</h2><p><br>In Mumshad&apos;s mocks, you may already be root. In the real exam and on killer.sh, you start off with a non-root user.</p><p>I prefer to become root at the start of the exam. This is probably more useful for CKA than CKAD, but worth doing just incase.</p><p>Before you begin, take note of non-root user and their home directory.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">$ pwd
/home/k8s</code></pre><figcaption>Printing your current working directory with pwd</figcaption></figure><p>Then become the root user.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">$ sudo -i</code></pre><figcaption>Becoming the root user with sudo -i</figcaption></figure><p>Your working directory (<code>pwd</code>) will now be /root.</p><p>You may not have a <code>.kube</code> in /root so you can copy that from the non-root user. <code>.kube</code> holds the context configuration used in the exam for switching between clusters or namespaces.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">cp -r /home/k8s/.kube /root</code></pre><figcaption>Copying the non-root user&apos;s .kube directory to /root</figcaption></figure><h2 id="vim">Vim</h2><p>Vim and Nano are pre-installed, but I use Vim. Vim is not required but highly recommended. I have been using Vim for around 6 years so it was the obvious choice. There is a bit of a steep learning curve with Vim. If you want to use it, make sure you know how to do enough to edit YAML files. If not, get comfortable with Nano.</p><p>I configure it with the following in <code>~/.vimrc</code>:</p><pre><code class="language-vimrc">set et ai ci nu sw=2 ts=2 sts=2</code></pre><p>It is worth understanding what these mean (expandtab, autoindent, copyindent, line numbers, and a 2-space shiftwidth, tabstop and softtabstop) so that you can memorise it better.</p><p>I also use the paste option when I need to paste from the Kube docs. However, it tends to mess with whitespace, so you should turn it off when not needed. When pasting from the Kube docs, run <code>:set paste</code>. When done, I quit and reload Vim to reload the settings from .vimrc. You can also run <code>:set nopaste</code>.</p><h2 id="bash">Bash</h2><p>There may be an existing <code>~/.bashrc</code> for the root user. If not, copy it from the non-root user&apos;s home directory like with <code>.kube</code>. 
I define some additional aliases and variables for use throughout the exam.</p><p>I will reference these aliases throughout the rest of this article.</p><p>At the end of <code>~/.bashrc</code> I define the basic <code>k</code> alias. Some people like to enable kubectl bash autocompletion for the alias, but I found that the latency to the exam server was high, so I&apos;m faster by not using it.</p><pre><code>alias k=&quot;kubectl&quot;</code></pre><p>Then I type the following line and copy-paste it 5 or 6 times to avoid some typing.</p><figure class="kg-card kg-code-card"><pre><code class="language-bashrc">alias k=&quot;k &quot;</code></pre><figcaption>Copy this a few times to edit later and avoid some typing</figcaption></figure><p>I additionally set the following aliases. I consider these the bare essential commands you will need.</p><figure class="kg-card kg-code-card"><pre><code>alias kg=&quot;k get&quot;
alias kd=&quot;k describe&quot;
alias kr=&quot;k run&quot;
alias kcr=&quot;k create&quot;
alias ka=&quot;k apply -f&quot;
alias krm=&quot;k delete&quot;</code></pre><figcaption>A set of basic kubectl bash aliases</figcaption></figure><p>Other commands you <em>may</em> find useful are explain and edit, but I did not use them. Add any other shortcut aliases you find you need.</p><p>This is a useful alias for modifying the currently selected context to change the namespace.</p><figure class="kg-card kg-code-card"><pre><code class="language-bashrc"># usage: kn &lt;namespace&gt;
alias kn=&quot;k config set-context --current --namespace&quot;
</code></pre><figcaption>A kubectl alias for modifying the currently selected context to change the namespace.</figcaption></figure><p>I also export a variable which can be used to generate YAML with the <code>kcr</code> and <code>kr</code> aliases without having to type it out fully, which can be error-prone.</p><pre><code class="language-bashrc">export dry=&quot;--dry-run=client -o yaml&quot;</code></pre><h2 id="tmux">Tmux</h2><p>Tmux is optional, but highly recommended. Tmux stands for &quot;terminal multiplexer&quot;. Another terminal multiplexer is GNU screen, but I use tmux.</p><p>Essentially, it means you can have multiple terminals in one. Unlike on your desktop, you cannot use a GUI application to spin up new terminals. You only have one terminal in the exam, so tmux is useful.</p><p>If you use one, make sure you know how to use it well enough that you don&apos;t get slowed down during the exam having to configure it.</p><p>I normally use iTerm2 on macOS and just ssh to servers in multiple iTerm2 panes. So I learnt tmux specifically for the exam as that no longer applies.</p><p>Here&apos;s the list of things I learnt, which was enough for the exam.</p><ul><li>Learn ctrl+b, the prefix for running commands in tmux.</li><li>ctrl+b &quot; for splitting the active pane horizontally</li><li>ctrl+b % for splitting the active pane vertically</li><li>ctrl+b arrow keys for moving the active terminal</li><li>The behaviour of mouse mode</li></ul><p>I set the following in <code>~/.tmux.conf</code> to enable mouse mode.</p><figure class="kg-card kg-code-card"><pre><code class="language-conf">set -g mouse on</code></pre><figcaption>Enabling mouse mode in tmux.</figcaption></figure><p>This is recommended if you do not want to learn as many keybinds. You can click into other panes and adjust their size by dragging.</p><p>You can also set this at runtime by switching to command mode with ctrl+b : then running <code>set -g mouse on</code>. 
It can also be turned off at runtime with <code>set -g mouse off</code>.</p><p>One thing to bear in mind with mouse mode is that sometimes you accidentally select text. To get out of this mode, you just need to press ctrl+c.</p><h3 id="configure-tmux-panes">Configure tmux panes</h3><p>I would suggest doing the exam on a large, high-resolution monitor so that you can see all of the resources on the screen and still have a working terminal.</p><p>You should absolutely configure the tmux panes how you want. These are just a few tips that helped me.</p><p>I keep my main operating terminal at the top at full width or split into two. You could split it into two so that the notepad in the top right goes over the rightmost terminal rather than the main one on the left. I would use the top terminal for running all the commands to generate YAML, running ad-hoc commands, and editing files in Vim.</p><p>Run this command in a tmux pane to have an updating view of the current namespace. You&apos;ll want a full-width pane for this command.</p><pre><code class="language-bash">watch -n0 kubectl get all,secret,cm,netpol,pvc -o wide --show-labels</code></pre><p>The resources listed should cover most of the namespace-scoped resources that you will need. Note: <code>kubectl get all</code> does not list all resources. If you&apos;re interested in the detail, check out <a href="https://github.com/kubernetes/kubectl/issues/151">this issue</a>. Pods, deployments, jobs, and cronjobs are included in <code>all</code> so do not need to be specified.</p><p><code>watch -n0</code> runs the command again 0.1 seconds after completion of the previous run, as 0.1 seconds is the minimum time supported.<br></p><p>Nodes, namespaces, and persistent volumes are cluster-scoped resources. These are worth seeing but I put them in a row of panes at the bottom because they don&apos;t change that much. They will change when you switch contexts with the <code>kubectl config set-context</code> command in the question. 
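</p><p>For example, at the start of a question you might run something like the following (the context and namespace names here are hypothetical - the real ones are given in the question):</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">kubectl config use-context k8s-c2   # switch to the cluster the question names
kn neptune                          # then pin the namespace from the question with the kn alias</code></pre><figcaption>Switching cluster and namespace at the start of a question. Names are made up.</figcaption></figure><p>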
There should be four contexts. Most of the time you&apos;re using the first cluster, but there are questions that use the others too, so these commands are worth using <code>watch</code> with.</p><p>The <code>-o wide</code> and <code>--show-labels</code> options are optional.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">watch -n0 kubectl get ns --show-labels</code></pre><figcaption>Watch all of the namespaces</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-bash">watch -n0 kubectl get no --show-labels</code></pre><figcaption>Watch all of the nodes</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>watch -n0 kubectl get pv --show-labels</code></pre><figcaption>Watch all of the persistent volumes</figcaption></figure><p>It could also be worth listing the contexts in another pane. This will display all of the available contexts that you can switch to using the command shown at the start of each question. It is worth noting that the <code>kn</code> alias will modify the namespace in the currently selected context. This command will show the selected context with an asterisk <code>*</code> in the first column. You can use this pane as a reminder to switch clusters and namespaces, or risk getting zero for the question!</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">watch -n0 kubectl config get-contexts</code></pre><figcaption>Get all contexts to see the selected context</figcaption></figure><p></p><p>I also have a full-width row for streaming events on the cluster. I don&apos;t use <code>watch</code> for this, as streaming them with kubectl&apos;s <code>-w</code> flag is more useful. This is useful to glance at for most questions when you are applying resources. You can see the errors or success of your new pods without having to run any commands manually.
Additionally, it can hint at future questions if there are errors that keep appearing!</p><figure class="kg-card kg-code-card"><pre><code>kubectl get ev -A -w</code></pre><figcaption>Stream all events on the cluster</figcaption></figure><h2 id="notepad">Notepad</h2><p>You can actually do this step before you click &quot;Begin exam&quot;, which saves a little bit of time. Open the notepad in the top right. Create a line for each question number so that you do not have to do it during the questions.</p><h1 id="editing-yaml">Editing YAML</h1><p>Generate YAML where you can. Edit it from there. When happy that the resource is ready to be applied, run <code>ka</code>. Before running <code>ka</code>, double-check you are in the correct namespace as defined in the question. Use <code>default</code> if it is not otherwise specified.</p><p>Generate your YAML with the <code>$dry</code> variable you set in the <code>~/.bashrc</code> (from the configuration section of this article). These options are generally supported on any command which <em>creates</em> or <em>updates</em> Kubernetes resources. You will use it most with <code>run</code> or <code>create</code>, but may wish to use it for <code>expose</code> or <code>apply</code>.</p><p>When generating YAML, use the <code>-n</code> option to set the namespace. If you&apos;re using <code>kn</code> to switch namespaces, this is not strictly needed.
However, having it defined in the YAML will ensure that it applies to the right namespace without relying on you remembering to switch.</p><p>Generating pod YAML would look something like this.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr $dry web --image=nginx:alpine</code></pre><figcaption>Generate YAML for a pod.</figcaption></figure><p>Generating YAML from an existing resource</p><pre><code class="language-bash">kg po web -o yaml</code></pre><p>Generating a deployment YAML might look something like this.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">kcr $dry deploy web --image=nginx:alpine --replicas=3</code></pre><figcaption>Generate YAML for a deployment.</figcaption></figure><p>Generating a service YAML from a deployment</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">k expose $dry deploy web --port=8080 --target-port=80</code></pre><figcaption>Generate a service YAML from a deployment.</figcaption></figure><p>Always generate resources like this, because kubectl will prefill things like names and labels with decent defaults. It will set up pod selectors to target the pods correctly. One thing you must note with both the pod and deployment generation is that the container name defaults to the resource name.
The question may specify a different container name, so you must edit the YAML from there.</p><p>There are also a few CLI tricks that are worth learning for managing your YAML files.</p><p>Firstly, I recommend naming your files &lt;question number&gt;-&lt;resource&gt;.yaml, which is faster than using directories.</p><p>It is also worth learning the short versions of common resources, which you can view with <code>kubectl api-resources</code>.</p><p>Then learn the following tricks, which are useful in different situations.</p><p>Pipe your YAML into <code>tee</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr $dry web --image=nginx:alpine | tee 01-po.yaml</code></pre><figcaption>Generate YAML and pipe it into tee.</figcaption></figure><p>This will output the YAML to your terminal and also to the specified file. Use this if you are generating the file for the first time and you are confident that you will not need to make any further edits, but you just want to check it.</p><p>Pipe your YAML into a Vim buffer.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr $dry web --image=nginx:alpine | vim -</code></pre><figcaption>Piping generated YAML into a vim buffer.</figcaption></figure><p>This will open the YAML in Vim, which you can then save, e.g. <code>:w 01-po.yaml</code>. You can then modify the YAML to add additional resources that you can&apos;t add using kubectl arguments or flags.</p><p>You can also just redirect it to a file if you&apos;re 100% sure you know what you&apos;re generating.</p><figure class="kg-card kg-code-card"><pre><code>kr $dry web --image=nginx:alpine &gt; 01-po.yaml</code></pre><figcaption>Redirecting generated YAML to a file.</figcaption></figure><p>You should learn what you can and can&apos;t generate with kubectl. If you know this upfront, you can decide whether to use <code>tee</code> or a Vim buffer to edit the YAML further.
Use <code>--help</code> if you have forgotten usage.</p><p>Some things you can&apos;t generate from <code>kubectl</code>. However, you should never need to manually type out a resource completely from scratch. You either need to paste from the docs or get it from a given file or existing resource. Then you&apos;ll need to know how to edit it from there.</p><p>These are some of the resources that you can&apos;t generate:</p><ul><li>Network policies</li><li>Persistent volumes</li><li>Persistent volume claims</li></ul><p></p><p>These are some common things I find myself manually defining in YAML, either by copying and pasting YAML or typing it out from memory. I won&apos;t go into detailed explanations. As I said at the start, this article is not so much about content, more about exam technique.</p><ul><li>readinessProbe and livenessProbe on a container. httpGet, exec command, and tcpSocket. periodSeconds, initialDelaySeconds.</li><li>Resource limits and requests on a container</li><li>Volumes in the pod spec and volumeMounts in a container</li><li>Persistent volume and persistent volume claim resources</li><li>Multi-container pods</li><li>initContainers</li><li>Container ports</li><li>serviceAccountName</li><li>securityContext and capabilities for a container</li></ul><p>Make sure you have bookmarked links to relevant YAML and that you know how to find them. You can alternatively use <code>kubectl explain</code>, but I personally don&apos;t find it easy to use. You do not have a lot of time to be looking things up, so make sure you know exactly what you are looking for and where it is. You can also treat the list as a checklist of what to learn.</p><p>Another thing that has tripped me up when practising is defining fields twice, e.g.
I defined pod resources at the top of the pod spec, but the YAML was generated with <code>resources: {}</code> at the bottom, which would override the first definition.</p><h1 id="general-exam-tips">General Exam Tips</h1><p>Get good sleep.</p><p>Eat well the day before.</p><p>Get hydrated at least 2 hours before, then stop drinking until towards the end of the exam. You can also ask the proctor to go to the bathroom once you have done the security checks but before the exam starts.</p><p>VERY IMPORTANT: make sure you are in the right namespace or you will get ZERO marks for the question. I reckon this is one reason why I did not pass on my first attempt. If the question does not specify the namespace, use the default namespace.</p><p>If the question asks you to move a resource around (e.g. to a new namespace), make sure you have deleted the old version.</p><p>Take notes by clicking the notes in the top right. Note down anything useful such as problem questions, high-percentage questions, low-percentage questions, etc. You can set up the notes before clicking &quot;Begin exam&quot;.</p><p>Use ctrl+r to search through <code>~/.bash_history</code> for previous commands to edit and re-run.</p><p>Copy and paste the names of things. Names are highlighted in the question and can be left-clicked to copy. Typos are an easy mistake to make, so they are an easy way to lose marks.</p><p>Check your solutions (time permitting). If you have time, try to check straight after implementing, when the context is still in your head. Try to check questions again at the end - you may spot things you missed. Use your notes to prioritise what to check.</p><h1 id="checking">Checking</h1><p>If you don&apos;t have much time, you will have to be strategic with your checking.
Here are some quick checks that you should hopefully be able to do for most questions.</p><ul><li>Verify you have deployed in the right namespace, or default if not specified.</li><li>If an object needs to be applied, verify the properties of the deployed object against the question.</li><li>If a file needs to be copied to a specific path, verify the file exists and has the properties in the question.</li><li>Verify the names of resources, containers, labels, etc. e.g. The deployment name may be different from the pod name, which may be different from the container name(s). svc and netpol podSelectors target pod labels, not deployment labels.</li></ul><p>Beyond that, there are checks that take a bit longer but can be worth doing if you have time and the questions are worth a lot of marks.</p><p>Getting a shell into a container is useful for verifying the state of a container. It must have a shell installed, such as <code>sh</code> or <code>bash</code>, but that should be the case during the exam. Not necessarily so for more secure production applications built on scratch images or otherwise!</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">k exec &lt;pod name&gt; -c &lt;container name&gt; -it -- sh</code></pre><figcaption>Get a shell into a container</figcaption></figure><p>From there, you can use any tooling installed in the image, such as <code>env</code>, <code>wget</code>, <code>nc</code>, <code>curl</code>, etc.</p><h2 id="checking-networking">Checking Networking</h2><p>A lot of this stuff is probably covered better in the docs, but here&apos;s my 2 cents on what you need for the exam.</p><p>Learn how to use curl, wget, and netcat (nc).</p><p>Learn that <code>busybox</code> images have <code>nc</code> and <code>wget</code> but NOT curl.</p><p>Learn that <code>nginx:alpine</code> images have all three tools.</p><p>Learn that <code>nc</code> can be used for direct TCP connections to non-HTTP services.</p><p>Learn how to specify the timeouts.
I use 2 seconds. If you forget, you can use the help options.</p><ul><li>wget: <code>-T 2</code></li><li>curl: <code>--connect-timeout 2</code></li><li>nc: <code>-w 2</code></li></ul><p>The timeout ensures the process will exit if you cannot establish a connection.</p><p>Understand that these tools will first look up the IP by DNS if a hostname (e.g. service or public host) is specified rather than an IP. If a network policy is applied, it must also allow UDP egress on port 53 to allow DNS lookups.</p><p>If a connection cannot be established, the process will exit with a non-zero exit code.</p><p>Using <code>wget</code> to make an HTTP request and output the response to stdout with <code>-O-</code> rather than the default of writing to disk.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">wget -O- -T 2 &lt;target host&gt;</code></pre><figcaption>Using <code>wget</code> to output the HTTP response to stdout with <code>-O-</code></figcaption></figure><p>Using <code>curl</code> to make an HTTP request</p><figure class="kg-card kg-code-card"><pre><code>curl --connect-timeout 2 &lt;target host&gt;</code></pre><figcaption>Using <code>curl</code> to make an HTTP request</figcaption></figure><p>Using <code>nc</code> to establish a connection to a TCP socket</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">nc -w 2 &lt;target destination&gt; &lt;target port&gt;</code></pre><figcaption>Using <code>nc</code> to establish a connection to a TCP socket</figcaption></figure><h2 id="testing-egress-network-policy-being-applied-to-a-specific-pod">Testing egress network policy being applied to a specific pod</h2><p>Get the internal IP of a ready TARGET pod from <code>kg po -o wide</code>, OR use the name and port of the service that selects the pod, which will resolve by DNS to a ready pod&apos;s internal IP.</p><p>Get the name of the source pod, whose labels are selected by the podSelector of the network policy.</p><figure class="kg-card kg-code-card"><pre><code
class="language-bash">kg po -l app=web</code></pre><figcaption>Selecting pods labelled app=web</figcaption></figure><p>Get a shell into the existing source pod.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">k exec &lt;pod name&gt; -it -- sh</code></pre><figcaption>Get a shell into an existing pod</figcaption></figure><p>From there you can run <code>nc</code>, <code>wget</code> or <code>curl</code> to make connections to test whether the netpol is being applied as intended.</p><p>Specify the timeouts or use CTRL-C after waiting.</p><h2 id="test-a-service">Test a service</h2><p>Temporary pods can be used for testing that a service is configured with the correct target port and pod selector labels.</p><figure class="kg-card kg-code-card"><pre><code>kr tmp --rm --restart=Never -i --image=nginx:alpine -- &lt;command&gt;</code></pre><figcaption>Running a temporary pod with an nginx:alpine image</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr tmp --rm --restart=Never -i --image=nginx:alpine -- curl --connect-timeout 2 &lt;svc name&gt;:&lt;svc port&gt;</code></pre><figcaption>Running <code>curl</code> in a temporary pod to connect to a service</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr tmp --rm --restart=Never -i --image=nginx:alpine -- wget -O- -T 2 &lt;svc name&gt;:&lt;svc port&gt;</code></pre><figcaption>Running <code>wget</code> in a temporary pod to connect to a service</figcaption></figure><p><br>If a connection cannot be established, the process will exit with a non-zero exit code and <code>&lt;ns&gt;/&lt;podname&gt; terminated (error)</code> will be output.</p><h2 id="testing-a-network-policy-being-applied-to-a-specific-set-of-pod-labels">Testing a network policy being applied to a specific set of pod labels</h2><p>It is easier to exec into an existing pod than to run your test in one line.
However, if there is no existing pod, you may wish to run your netpol test from a temporary pod. You would need to ensure the labels from the netpol podSelector are set on the pod using <code>--labels</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">spec:
  podSelector:
    matchLabels:
      key1: val1
      key2: val2</code></pre><figcaption>podSelector on a Network Policy spec</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-bash">kr tmp --labels=key1=val1,key2=val2 --rm --restart=Never -i --image=busybox -- &lt;command&gt;</code></pre><figcaption>Run a temporary pod with specific labels to match the podSelector of a network policy</figcaption></figure><p>If a connection cannot be established, the process will exit with a non-zero exit code and pod <code>&lt;ns&gt;/&lt;podname&gt; terminated (error)</code> will be output.</p><h1 id="conclusion">Conclusion</h1><p>If you practise with the resources and learn some of these techniques, you should have a good chance of passing the exam. Some of these tips are my personal preference, so do not follow this exactly. It will be you taking the exam after all, so make sure you are doing what works for you!</p><p>If there are any major issues with my suggestions, feel free to reach out. If it&apos;s just a change from a kubectl update I will probably not update the post, but otherwise I will.</p><p>Thanks for reading and good luck!</p>]]></content:encoded></item><item><title><![CDATA[Git Blame - the new starters best friend]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Firstly, Git Blame&apos;s name is a bit silly. I propose renaming it <code>git who-what-where-when-why</code> or <code>git 5w</code> for short.</p>
<p>From the command-line, it outputs the file you specify, but each line is annotated with who changed it last, the commit that changed it, and when it was changed.</p>
<p>That&apos;</p>]]></description><link>https://mattburman.com/git-blame-the-new-starters-best-friend/</link><guid isPermaLink="false">5c7c56ee30a4090001b18c62</guid><category><![CDATA[programming]]></category><category><![CDATA[agile]]></category><category><![CDATA[squad]]></category><category><![CDATA[github]]></category><category><![CDATA[git]]></category><category><![CDATA[code]]></category><category><![CDATA[onboarding]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Fri, 10 Aug 2018 15:38:54 GMT</pubDate><media:content url="https://mattburman.com/content/images/2019/03/2018-08-10-at-16.04-2.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://mattburman.com/content/images/2019/03/2018-08-10-at-16.04-2.png" alt="Git Blame - the new starters best friend"><p>Firstly, Git Blame&apos;s name is a bit silly. I propose renaming it <code>git who-what-where-when-why</code> or <code>git 5w</code> for short.</p>
<p>From the command-line, it outputs the file you specify, but each line is annotated with who changed it last, the commit that changed it, and when it was changed.</p>
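<p>Here is a minimal sketch of that output in a throwaway repository (the file name, author, and commit message are invented for illustration):</p>

```shell
# Create a throwaway repo with a single commit (all names are illustrative)
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name "Alice" && git config user.email "alice@example.com"
echo 'console.log("hi")' > app.js
git add app.js && git commit -qm "add greeting"

# Each output line is annotated: <commit> (<author> <date> <line#>) <code>
git blame app.js

# --line-porcelain additionally exposes each annotated line's commit summary
git blame --line-porcelain app.js | grep '^summary'
```

<p>The porcelain output is where tools can pull the commit message from; the default output sticks to commit, author, and date.</p>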
<p>That&apos;s not the most convenient use of it though. If you use VSCode, the GitLens extension has the best blame features. This feature is also available in other IDEs, but GitLens is the best example I&apos;ve seen.</p>
<p><img src="/content/images/2018/08/2018-08-10-at-16.04-2.png" alt="Git Blame - the new starters best friend" loading="lazy"></p>
<ul>
<li>Who - Shown at the end of the current line and available on hover over the commits in the sidebar</li>
<li>What - Shown in the commit message and the actual code</li>
<li>Where - Shown in the code</li>
<li>When - Shown in the commit dates</li>
<li>Why - Shown in the commit message. Your commit messages may even have references to tickets/issues for you to quickly look up why.</li>
</ul>
<h1 id="whywasthisusefulforme">Why was this useful for me?</h1>
<p>I recently started my first full-time role that wasn&apos;t for my university. It&apos;s my first experience working on projects that were created years ago. Our squad owns a lot of business-critical systems. Many code files in some of our critical systems have been modified by over ten people. Many of those people aren&apos;t in our squad anymore. They may have moved squads, moved tribes, or left the company.</p>
<h1 id="whyisthisusefulespeciallyfornewstarters">Why is this useful, especially for new starters?</h1>
<p>Firstly, as a new starter, you don&apos;t even know your team yet, let alone all of the people that have touched the projects your team owns. Sure, you can ask people what they have experience with, but you can&apos;t ask that question in detail - commit-by-commit, line-by-line. They won&apos;t even know anyway; no one can possibly keep track of who remembers the details of every file. By looking at who changed the code last, you can see who most recently worked on the parts of the code you need to change, and so who has them freshest in memory. At this point, if you are stuck, you may be able to ask that person, or ask if anyone knows them. People that didn&apos;t last change the code may not remember that exact change, but they likely will remember the person who changed it and their intentions. This is especially true if they are working in an agile way with daily standups, code review, squad code ownership and pair programming.</p>
<p>You can also begin to understand the history of the code. On an old codebase, there&apos;s tech debt. But you can&apos;t just go in and fix it all; it would be too much work. When you begin to understand the history of the changes, you can begin to understand why code was written that way in the past. What would be the wrong decision now may have been right then. It may be that a decision was made then that you have to stick with for consistency. If there are parts that you can reasonably refactor, understanding the history will help you to understand the effects of your so-called &quot;minor&quot; changes. The changes in the most recent months will help you to get clued in to the conversations you may overhear in standups or otherwise. After all, everyone else was around when those changes were made; they have them in some way in their memory. If you keep reading about these changes, you can begin to gain some context about what people are talking about - without having to ask.</p>
<p>It&apos;s low effort. As a new starter, you&apos;re already overwhelmed with information and unanswered questions. If you can quickly answer random questions about bits of code yourself, you free up others&apos; time for more important or critical questions. You will be a quicker learner overall, having intuition rather than having to be told and taught everything.</p>
<p>Don&apos;t blame your teammates for your unanswered questions, just use git blame to ask honest questions of who, what, where, when and why.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Design your optimal human condition algorithmically]]></title><description><![CDATA[How could we optimise the human condition using the personal data around us?]]></description><link>https://mattburman.com/design-your-life-by-algorithms/</link><guid isPermaLink="false">5c857bfd0fbc250001a855b6</guid><category><![CDATA[quantified self]]></category><category><![CDATA[data]]></category><category><![CDATA[big data]]></category><category><![CDATA[programming]]></category><category><![CDATA[future]]></category><category><![CDATA[idea]]></category><category><![CDATA[iot]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[futurism]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Thu, 26 Apr 2018 12:16:05 GMT</pubDate><media:content url="https://images.unsplash.com/36/yJl7OB3sSpOdEIpHhZhd_DSC_1929_1.jpg?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=5450abf621da4d7c7f291aa0de3beae1" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="idea">Idea</h1>
<img src="https://images.unsplash.com/36/yJl7OB3sSpOdEIpHhZhd_DSC_1929_1.jpg?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=5450abf621da4d7c7f291aa0de3beae1" alt="Design your optimal human condition algorithmically"><p>I use a chrome extension called Video Speed Controller to set the speed of videos that I watch. When thinking about an &#x201C;optimal&#x201D; default setting, I realised this optimal level would ideally be dynamic.</p>
<p>Ideally, I would set this level algorithmically based on data. I could start with a function to output this speed level based on the time of day. Fairly simple. Fastest in the morning, and dropping off in the evening, as the brain should wind down for sleep.</p>
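<p>As a sketch, that first simple function might look something like this (the hour bands and multipliers are invented for illustration, not recommendations):</p>

```shell
# Hypothetical mapping from hour of day to a playback-speed multiplier.
speed_for_hour() {
  h=$1
  if [ "$h" -ge 6 ] && [ "$h" -lt 12 ]; then
    echo "2.0"   # morning: fastest
  elif [ "$h" -ge 12 ] && [ "$h" -lt 18 ]; then
    echo "1.5"   # afternoon
  elif [ "$h" -ge 18 ] && [ "$h" -lt 22 ]; then
    echo "1.2"   # evening: winding down for sleep
  else
    echo "1.0"   # night: normal speed
  fi
}

# Strip any leading zero from the current hour before comparing numerically
speed_for_hour "$((10#$(date +%H)))"
```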
<p>However, this function mapping inputs to video speed could use many more data inputs than the current time&#x2026; data that is not just a general heuristic that &#x201C;in theory&#x201D; or &#x201C;in general&#x201D; represents my current optimal brain processing speed, but truly represents it.</p>
<h1 id="datainputs">Data inputs</h1>
<p>There are many inputs from the world of quantified self that could inform this algorithm.</p>
<p>Location. I already track this - updating every minute - near-enough real-time. I could define geobounded areas that are multipliers on the speed. At work - 1.5x, at home - 0.9x, in public - 1x. One could even classify locations into categories with defined multipliers to work for any location.</p>
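<p>To sketch how that could compose (the categories and factors here are made up), the location class could simply scale a base speed:</p>

```shell
# Hypothetical per-location-category multipliers applied to a base speed
multiplier_for_location() {
  case "$1" in
    work) echo "1.5" ;;
    home) echo "0.9" ;;
    *)    echo "1.0" ;;  # public / unclassified locations
  esac
}

# awk handles the floating-point multiply
base=1.4
awk -v b="$base" -v m="$(multiplier_for_location work)" 'BEGIN { printf "%.2f\n", b * m }'
```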
<p>Heart-rate. I sort of have access to this with FitBit. Some initial thoughts&#x2026; not sure how heart rate maps to mental processing speed; not sure if I can get my heart rate data real-time enough to be meaningful. Usage is less obvious to me at this point but possibly high-yielding.</p>
<p><em>actual</em> neuronal activity. This is a thing. Electroencephalography. Or, EEG for short. The issue is right now you have to wire up electrodes to your skull, even having to lubricate your head with gel to ensure optimal connection. Not quite convenient enough. Maybe one day technology will create something that you can forget is there and still measure neuronal activity meaningfully enough to derive some insight, but we&#x2019;re not there yet.</p>
<h1 id="otheralgorithms">Other algorithms</h1>
<p>What other variables could be set based on quantified data about the self? Video playback speed is a good one, but what else? I&#x2019;ll throw some ideas out but would love to hear other people&#x2019;s ideas. Tweet me or comment!</p>
<p>Temperature of your house? Nest somewhat learns user temperature preferences with reinforcement learning, but could it determine an optimal temperature with only quantified self data?</p>
<p>Brightness and temperature (colour profile not heat) of your screens to allow more natural and healthy circadian rhythm regulation. F.lux allows one to set up a schedule of screen temperature, using fixed user preferences, the time, and the sunrise and sunset as inputs. What if we could control this better? Maybe a rule that won&#x2019;t lower the temperature unless you are at home? Maybe using ambient lighting levels from IoT devices?</p>
<h2 id="possibilities">Possibilities</h2>
<p>I&#x2019;m sure there are a lot of possibilities enabled by the combination of real-time quantified self data. It feels potentially powerful, and something that has only been enabled with recent developments in technology. Mobile phones and IoT devices for data sources. Big data architectures for processing. Cloud computing for flexible configuration.</p>
<p>What could we optimise about the human condition?</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[On Value, and why my degree isn’t valuable]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>* <em>excuse the clickbait title, I promise this article is more balanced!</em></p>
<p>There are things that others value highly, which I <em>do not</em> value at all.<br>
There are things that others value highly, which I <em>never valued</em> at all.<br>
There are things that <em>I value highly</em>, which <em>others do not</em> value</p>]]></description><link>https://mattburman.com/on-value-and-why-my-degree-isnt-valuable/</link><guid isPermaLink="false">5c7c56ee30a4090001b18c60</guid><category><![CDATA[value]]></category><category><![CDATA[skills]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Tue, 16 Jan 2018 00:13:42 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>* <em>excuse the clickbait title, I promise this article is more balanced!</em></p>
<p>There are things that others value highly, which I <em>do not</em> value at all.<br>
There are things that others value highly, which I <em>never valued</em> at all.<br>
There are things that <em>I value highly</em>, which <em>others do not</em> value at all.<br>
There are things that <em>I once valued highly</em>, which I <em>no longer value</em>.</p>
<p>I once valued a good degree classification. I no longer value this so highly&#x2026;</p>
<p><em>I once valued a degree.</em> I was getting a degree to find my place in the world. It was going to give me value. It was hence going to make me valuable. A degree was a catalyst to a base value level.</p>
<p>I then started my degree. I saw other students from other universities <em>gaining skills &amp; value from Hackathon events.</em> They were gaining useful, practical, valuable skills. This felt highly valuable to me &#x2013; the skills which were value creators. I started attending them and learnt a lot.</p>
<p><em>I then applied my skills</em>, my value, in work in a research group in my university. I was valued by them. <em>My skills felt valued.</em> I was further valued by external companies that I was working with.</p>
<p>I then was approached for contract work. <em>Value, in terms of hourly rate, had gone up by a factor of 5.</em> This is all from the <em>skills I have developed by myself, in my own time.</em> I felt valued. I was providing services that others could not for less. I felt I could charge more. I was understanding the potential of the skills I have developed.</p>
<p>I started the 2016&#x2013;17 academic year, and I was learning. Yet, <em>what I was learning didn&#x2019;t feel valuable.</em> It seemed that <em>learning theory was not increasing my value.</em> The once-valued degree felt much less valuable than it once did. My skills have been applied in wide-ranging contexts, yet they were mainly built outside of my degree. My degree merely supplemented them.</p>
<p>Only now do I realise where my motivation levels for actions come from. My <em>motivation for action comes from my perceived level of value potential from that action</em>. I have had <em>next-to-zero motivation to learn theoretical modules.</em> In contrast, more <em>practical modules have plenty of handles for my motivation to latch onto.</em></p>
<p><em>Skill-building and value-creating potential attracts my motivation.</em> Having felt such value from my skills has <em>led me to heavily devalue theory, creating extreme motivation polarity.</em> The most <em>theoretical, I will fail.</em> The most <em>practical, I expect to have done very well in</em>. I have provided value to a real client in the process of the most practical module. I have learnt skills I can use to generate further value.</p>
<p>This leads me to conclude: <em>I don&apos;t care about how well I do in my degree. The outcome of one&apos;s degree is not proportional to the value potential one obtains in the time it takes to get it, throughout the university experience.</em> Many people drop out of their degree due to huge value potential from other places (e.g. Gates, Zuckerberg, Jobs).</p>
<p>I&#x2019;ll still continue my degree. It has been a great experience, but not because of the modules of my degree. It&#x2019;s been great because it has provided good environments to develop my skills and value potential. I have been surrounded by people who share the same ideas of what is valuable. A degree is simply one product of going to University. Going to university has many products of varying value, like a company with many products of varying value. <em>My degree is the Microsoft Zune, the Samsung Galaxy Note 7, or the Google Plus of my University experience.</em> Microsoft (NASDAQ: MSFT), Samsung Electronics (KRX: 005930), and Google (NASDAQ: GOOG/GOOGL) still have massive value. They gained much value from elsewhere.</p>
<p><img src="https://mattburman.com/content/images/2018/01/samsung-value.png" alt="samsung-value" loading="lazy"></p>
<p><em>Samsung stock lost value in Q3 2016 due to the Note 7 recall, but it didn&#x2019;t affect their value trajectory&#x200A;&#x2014;&#x200A;it went straight back up and continued apace.</em></p>
<p>I have also built a network of people through HackSheffield. That wouldn&#x2019;t have happened if I hadn&#x2019;t attended university.</p>
<p>How much value can I create when I am not sitting in lecture theatres every day listening to things that I can&#x2019;t use straight away to generate value?</p>
<p>How much value can I create when I can use the energy I have to learn on that which I perceive to be valuable?</p>
<p>How much value can I create when I am not using my energy to learn, and am just converting my skills into value?</p>
<p>How much value can I create when I am doing the things I want to do, doing that which is naturally driven by my intrinsic motivation?</p>
<p>These questions I want to answer. My degree is getting in the way of that.</p>
<p>However, I have started a degree, and I want to finish it. It&#x2019;s just another unfinished project.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Quantity over Quality — Attend lots of Hackathons]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The first few Hackathons may be hard. You may not be happy with what you<br>
produce. You may compare yourself to everyone else and feel bad about it. But<br>
keep persevering and you will come through it a great developer. Things get<br>
easier.</p>
<h1 id="quantityoverquality">Quantity over quality</h1>
<p><em>from</em> <a href="http://www.amazon.com/Art-Fear-Observations-Rewards-Artmaking/dp/0961454733">Art &amp; Fear</a></p>]]></description><link>https://mattburman.com/quantity-over-quality-hackathons/</link><guid isPermaLink="false">5c7c56ee30a4090001b18c5f</guid><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Thu, 11 Jan 2018 22:04:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The first few Hackathons may be hard. You may not be happy with what you<br>
produce. You may compare yourself to everyone else and feel bad about it. But<br>
keep persevering and you will come through it a great developer. Things get<br>
easier.</p>
<h1 id="quantityoverquality">Quantity over quality</h1>
<p><em>from</em> <a href="http://www.amazon.com/Art-Fear-Observations-Rewards-Artmaking/dp/0961454733">Art &amp; Fear</a><br>
by David Bayles and Ted Orland:</p>
<blockquote>
<p>A ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right side solely on its quality.<br>
His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the &#x201C;quantity&#x201D; group: fifty pounds of<br>
pots rated an A, forty pounds a B, and so on. Those being graded on &#x201C;quality,&#x201D; however, needed to produce only one pot &#x2014; albeit a perfect one &#x2014; to get an A. Well, come grading time a curious fact emerged: <strong>the works of highest quality were all produced by the group being graded for quantity</strong>. It seems that while the &#x201C;quantity&#x201D; group was busily churning out piles of work and learning from their mistakes, the &#x201C;quality&#x201D; group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.</p>
</blockquote>
<p><strong>So the more things you make, the better quality things you end up with.</strong></p>
<p>As long as you go in with a mindset that you are there to learn and persevere, your skills and knowledge will improve rapidly each time. It may not feel like it. It may feel like everyone else can learn things better than you. The reality is that they have been in your shoes and have built many more things. Do not worry about being &#x201C;good&#x201D; at your first few hackathons. This is the perfection mindset from the &#x201C;quality&#x201D; group. It is perfectly fine to attend to simply learn &#x2014; even if what you produce is not going to win.</p>
<p>After a few Hackathons, you will have learnt a lot from all of your hackathon<br>
projects. They may not be amazing, but you are getting better. You may have even won a few things. Anyone working on just a single project in a single team, such as a University project, will not have learnt as much as you. You will have met multiple sets of people, each with a unique set of skills. Each team will have given you a crash-course in the technology they are most familiar with, from the unique perspective that makes them successful at hackathons.</p>
<p>You can adapt what you learn from each team you meet to build up your skill-set in the things you are most interested in. You will feel confident that you can attend a hackathon and create a project you are happy to demo. You will find it much easier to create something that is easy to demo &#x2014; and you will even win a few things. It just needs practice.</p>
<p><strong>So the more Hackathons you attend, the better quality Hackathon projects you end up with.</strong></p>
<h1 id="hackitforward">#HackItForward</h1>
<p>When you have been to a lot of hackathons and learnt a lot of things, remember the first time hacker that&#x2019;s struggling to feel valuable. Bring them into your team and teach them a thing or two. Show them things that they can use in the future. Show them what version control is. Teach them about servers. Help them with their &#x201C;hello world&#x201D;.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Use of GitHub for event organisation]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The first part of the GitHub campus experts&#x2019; training is to write an impact proposal. My impact proposal was largely focused on ensuring future generations of organisers can effectively inherit the existing organisation and continue to make the community their own.</p>
<p>Specifically - the concept of knowledge transfer was</p>]]></description><link>https://mattburman.com/use-of-github-for-event-organisation/</link><guid isPermaLink="false">5c7c56ee30a4090001b18c5e</guid><category><![CDATA[github]]></category><category><![CDATA[project management]]></category><category><![CDATA[organisation]]></category><category><![CDATA[events]]></category><dc:creator><![CDATA[Matt Burman]]></dc:creator><pubDate>Wed, 10 Jan 2018 19:02:02 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The first part of the GitHub campus experts&#x2019; training is to write an impact proposal. My impact proposal was largely focused on ensuring future generations of organisers can effectively inherit the existing organisation and continue to make the community their own.</p>
<p>Specifically - the concept of knowledge transfer was very central to my thoughts throughout. As it happens, GitHub itself was a potential solution for this. Knowledge transfer is important for student communities as the churn rate of positions is high. Once new people join to keep the organisation going, how can they quickly get up to speed?</p>
<p>Generally, a new organising team is elected each year. But even during a single year, new people can join, and people can leave. This is largely because students often over-commit. An unpaid, voluntary role often must take less of a priority than studies or paid work.</p>
<p>The idea was to use a GitHub organisation to run a student community. At first, I wasn&apos;t confident everyone would switch. Maybe people would just stick to Slack? But I was pleasantly surprised - it seemed to work well.</p>
<h1 id="repo">Repo</h1>
<p>Repositories are used for separating concerns of the organisation. The most heavily used repo as of writing is <code>HackSheffield/3</code>, which contains everything about HackSheffield 3.0. I&apos;ll focus on that repo, but there are also repos for general community issues, publicity materials, our website, and some other projects.</p>
<p>In the directory structure are things like contract templates, signed contracts, email templates, and other assorted things. Before we can get to the point where we have signed contracts, we need... Issues!</p>
<h1 id="issues">Issues</h1>
<p>Each repository has issues. So for the example of <code>HackSheffield/3</code>, we had a few types of issues, as you can see from the set of labels:<br>
<img src="/content/images/2018/01/DraggedImage.af9029229a9a4fe590cfa32979b540d4.png" alt="DraggedImage.af9029229a9a4fe590cfa32979b540d4" loading="lazy"><br>
They can be further filtered by Author:<br>
<img src="/content/images/2018/01/2017-10-11-at-14.12.5d0195650c744c51b8941382121a3a4e.png" alt="2017-10-11-at-14.12.5d0195650c744c51b8941382121a3a4e" loading="lazy"><br>
and User assigned:<br>
<img src="/content/images/2018/01/2017-10-11-at-14.14.f321ea3046154e12806a9010af1510a4-1.png" alt="2017-10-11-at-14.14.f321ea3046154e12806a9010af1510a4-1" loading="lazy"></p>
<p>Generally, most tasks or discussion points for organising an event come under these categories. There may be other categories that are useful, but these are the ones we found.</p>
<p>The most heavily used labels are probably <code>sponsorship</code> followed by <code>catering</code>. These things specifically had their own project boards...</p>
<p>As students in CS, many of us are on GitHub every day, so issue updates simply appear on our GitHub homepages. When one person contributes, it encourages everyone else to feed back about that contribution.</p>
<p>Another way to encourage GitHub contribution was to set up a Slack integration to post there. This is effectively an alternative notification channel, allowing people to consume updates in another way (with push notifications!) on top of GitHub&apos;s email or website notifications.</p>
<h1 id="projects">Projects</h1>
<p>For the previous hackathons we used a mixture of Slack and Trello to organise the events. Slack for the main discussion, and Trello for status tracking.</p>
<p>This brought up some issues with getting people to update their Trello board. Most discussion therefore just happened on Slack. No one was actively using Trello at the time for anything else, so generally it was just one person trying to keep things up to date based on Slack discussions. That&apos;s not really collaboration. Oftentimes, people would just talk on Slack and forget to add decisions to Trello. It just wasn&#x2019;t central enough to the organisation process.</p>
<p>Enter GitHub Projects. I think GitHub Projects were a much better solution for us.<br>
<img src="/content/images/2018/01/2017-10-11-at-14.24.23c5d857298047449c636356dce390d5.png" alt="2017-10-11-at-14.24.23c5d857298047449c636356dce390d5" loading="lazy"><br>
The conversation is already happening on the issues. So as soon as status changes, that issue&#x2019;s card in a project board can be moved to a different status column. In addition, when the status changes on the board, that is displayed on the issue page.<br>
<img src="/content/images/2018/01/2017-10-28-at-11.17.941d2020cce54b7f9fe6ab1b08d80d68.png" alt="2017-10-28-at-11.17.941d2020cce54b7f9fe6ab1b08d80d68" loading="lazy"></p>
<p>GitHub now feels central to the organisation process.</p>
<h1 id="vision">Vision</h1>
<p>This brings me, finally, to the visionary benefits of this system. One day, if people persevere with GitHub, organisers can simply look at past years&#x2019; repositories for the relevant files or issues on past events. Often, sequel events can save a lot of time and effort by looking at the context around things. Questions can be answered. Confidence can be given to organisers based on what has happened before.</p>
<p>For example&#x2026;</p>
<h2 id="sponsorship">Sponsorship</h2>
<p>&quot;Let&apos;s contact this local company about sponsorship! Can anyone find an email address? Hopefully they will respond to our blind email.&#x201D;</p>
<p>That turns into &quot;Let&apos;s contact them through our previous contacts and leverage the knowledge of the past to our advantage. Let me just look at the past repo to see what happened.&quot;</p>
<p>This is effectively using GitHub as a CRM. Maybe a CRM would be better suited to the task, but GitHub works well for event organisers. Despite not being created specifically as a CRM, it is general-purpose enough to act as one.</p>
<p>Students organising Hackathons don&apos;t want to learn professional CRM software (and check and update it daily!). But they do want to learn how to use Git and GitHub! They are learning Git and GitHub to organise their community, whilst in the process learning how developers can collaborate on code.</p>
<h2 id="vendors">Vendors</h2>
<p>With catering, you get similar benefits. You can look up what meals were there before with a simple search through issues using tags. Then, you can look at specific issues to see how things went with a caterer, if updates were made. Were they professional? How much did it cost? Did they make things easy for us? All of these questions can be answered.</p>
<p>Being able to answer these questions with a quick search gives us power. It saves time. It gives us the knowledge required to make decisions with confidence. It means one organiser&#x2019;s work in 2017 is helping the organisers in 2018. At the time, they might not realise it, but they could be massively influencing the direction of the next event simply by a quick positive or negative comment on an issue.</p>
<p>This means it is important to keep issues up to date. By not updating your issue when the situation changes, you are doing a disservice not only to the current organisers but those in the future too.</p>
<h1 id="newteamfeatures">New team features</h1>
<p>At GitHub Universe, new team features were announced. Stay tuned for more on how usage adapts&#x2026; but it looks exciting.</p>
<p>If anyone has any questions, you can find me on Twitter <a href="https://twitter.com/_mattburman">@_mattburman</a> or otherwise via <a href="/">my website</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>