[{"content":"","date":null,"permalink":"/tags/books/","section":"Tags","summary":"","title":"Books"},{"content":"","date":null,"permalink":"/tags/guides/","section":"Tags","summary":"","title":"Guides"},{"content":"Link Collection #The Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, exploring valuable resources and insights.\nProjects #ente-io/ente #Fully open source, End to End Encrypted alternative to Google Photos and Apple Photos - ente-io/ente\ncantino/mcfly #Fly through your shell history. Great Scott! Contribute to cantino/mcfly development by creating an account on GitHub.\nollama/ollama #Get up and running with Llama 2, Mistral, and other large language models locally. - ollama/ollama: Get up and running with Llama 2, Mistral, and other large language models locally.\nBooks #Slow Down #Why, in our affluent society, do so many people live in poverty, without access to health care, working multiple jobs and are nevertheless unable to make ends meet, with no future prospects, while the planet is burning? In his international bestseller, Kohei Saito argues that while unfettered capitalism is often blamed for inequality and climate change, subsequent calls for “sustainable growth” and a “Green New Deal” are a dangerous compromise.\nISBN: 978-1662602368\nBuy on Amazon\nThe Secret Life of Money #The Secret Life of Money leads readers on a fascinating journey to uncover the sources of our monetary desires. By understanding why money has the power to obsess us, we gain the power to end destructive patterns and discover riches of the soul.\nISBN: 9781621538158\nBuy on Amazon\nThe Nice Factor #Nice people want to be liked by everyone. They always afraid of offending so they accommodate other people above themselves and adapt their behaviour to suit what they think other people want. Nice people are people-pleasers but they feel compromised and hard done-by a lot of the time.\nISBN: 9781905745364\nBuy on Amazon\nGeneral #Simplifying the xz backdoor #Step by step I simplify the beginning of the xz backdoor so there’s no doubt of what it does.\nUnsigned Commits #I’m not going to cryptographically sign my git commits, and you shouldn’t either.\nAtuin - Magical Shell History #Sync, search and backup shell history with Atuin\nManagement #How to Build a High Performing Team - Leadership Garden #Discover how to build high-performing software engineering teams, focusing on synergy, clear goals, and shared vision. Get actionable insights and practical advice.\nLieutenants are the limiting reagent #Why don\u0026rsquo;t software companies ship more products? Why do they move more slowly as they grow? What do we mean when we say \u0026ldquo;this company lacks focus\u0026rdquo;?\nBetter to micromanage than be disengaged. #For a long time, I found the micromanager CEO archetype very frustrating to work with.\nGuides #Using Shortcuts Automations To Remind Me of Coupon Codes #Using Shortcuts Automations To Remind Me of Coupon Codes I use an app called SudShare to do my laundry. I got an email from them the other day with a coupon…\nMakefile tricks for Python projects #I like using Makefiles. They work great both as simple task runners as well as build systems for medium-size projects. This is my starter template for Python projects. Note: This blog post assumes …\nUpdating my website from my iPad! 
| Daniel Diaz\u0026rsquo;s Website #How I am able to use github codespaces to develop and push updates to my website, from my iPad.\n","date":"25 April 2024","permalink":"/links/link-collection-5/","section":"Links","summary":"The Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, exploring valuable resources and insights.","title":"Link Collection 5"},{"content":"","date":null,"permalink":"/tags/links/","section":"Tags","summary":"","title":"Links"},{"content":"","date":null,"permalink":"/links/","section":"Links","summary":"","title":"Links"},{"content":"","date":null,"permalink":"/tags/management/","section":"Tags","summary":"","title":"Management"},{"content":"","date":null,"permalink":"/tags/news/","section":"Tags","summary":"","title":"News"},{"content":"","date":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"/","section":"VirtuallyTD","summary":"","title":"VirtuallyTD"},{"content":"Link Collection #The Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, exploring valuable resources and insights.\nProjects # Sloth - Mac app that shows all open files and sockets #Sveinbjörn\u0026rsquo;s personal website. Also some open source software stuff.\nGitHub - mrjackwills/havn: A fast configurable port scanner with reasonable defaults #A fast configurable port scanner with reasonable defaults - GitHub - mrjackwills/havn: A fast configurable port scanner with reasonable defaults\nIntroduction | asdf #Manage multiple runtime versions with a single CLI tool\nBooks # Firestarters #Based on interviews with entrepreneurs and leaders in many walks of life, this self-help book gives readers the tools for finding success in their careers, businesses, organizations, and private lives. What is the difference between those bold enough to pursue their dreams and others who never get comfortable enough to ignite their lives? The doers are \u0026ldquo;Firestarters\u0026rdquo; and, because of them, the world is a much different, and often, better place. This motivational how-to book provides insights into the crucial difference between people who make things happen and those who only think about making an impact. Based on research from many disciplines and interviews with professionals at the top of their fields, Firestarters creates a complete roadmap to achieve personal success and make an impact in the world. The heart of the book features stories about successful entrepreneurs, CEOs, organizational leaders, and forward-looking thinkers from a variety of professions.\nISBN: 9781633883482\nBuy on Amazon\nThe Rational Optimist: How Prosperity Evolves #In a bold and provocative interpretation of economic history, Matt Ridley, the New York Times-bestselling author of Genome and The Red Queen, makes the case for an economics of hope, arguing that the benefits of commerce, technology, innovation, and change what Ridley calls cultural evolution will inevitably increase human prosperity.\nISBN: 9780007374816\nBuy on Amazon\nBillion Dollar Whale #Named a Best Book of 2018 by the Financial Times and Fortune, this \u0026ldquo;thrilling\u0026rdquo; (Bill Gates) New York Times bestseller exposes how a \u0026ldquo;modern Gatsby\u0026rdquo; swindled over $5 billion with the aid of Goldman Sachs in \u0026ldquo;the heist of the century\u0026rdquo; (Axios). 
Now a #1 international bestseller, Billion Dollar Whale is \u0026ldquo;an epic tale of white-collar crime on a global scale\u0026rdquo; (Publishers Weekly), revealing how a young social climber from Malaysia pulled off one of the biggest heists in history. In 2009, a chubby, mild-mannered graduate of the University of Pennsylvania\u0026rsquo;s Wharton School of Business named Jho Low set in motion a fraud of unprecedented gall and magnitude. One that would come to symbolize the next great threat to the global financial system.\nISBN: 9780316436489\nBuy on Amazon\nGeneral # 10 Years After Snowden: Some Things Are Better, Some We’re Still Fighting For #On May 20, 2013, a young government contractor with an EFF sticker on his laptop disembarked a plane in Hong Kong carrying with him evidence confirming, among other things, that the United States government had been conducting mass surveillance on a global scale. What came next were weeks of\u0026hellip;\nWorld likely to breach 1.5C climate threshold by 2027, scientists warn #UN agency says El Niño and human-induced climate breakdown could combine to push temperatures into ‘uncharted territory’\nMonitor your AWS bill #Nobody likes a surprise bill. Learn some ways to keep your AWS bill under control and avoid that end of the month panic.\nManagement # Measuring an engineering organization. #This is an unedited chapter from O’Reilly’s The Engineering Executive’s Primer. For the past several years, I’ve run a learning circle with engineering executives. The most frequent topic that comes up is career management–what should I do next? The second most frequent topic is measuring engineering teams and organizations–my CEO has asked me to report monthly engineering metrics, what should I actually include in the report? Any discussion about measuring engineering organizations quickly unearths strong opinions.\nHow To Prioritize Tasks #Shipping products is hard. What makes it hard is that typical products involve multiple teams and multiple dependencies. Navigating these challenges is non-trivial. There are technical challenges to overcome, but those are typically not the biggest blockers.\nHow to survive a toxic workplace and how to avoid creating one #Inspired by a two minute video about how the Navy Seals does it\n","date":"12 December 2023","permalink":"/links/link-collection-4/","section":"Links","summary":"The Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, exploring valuable resources and insights.","title":"Link Collection 4"},{"content":"","date":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux"},{"content":"Linux Patch Management Tutorial #Step 1: Understanding Patches #A patch is a file that consists of a list of differences between one set of files and another. In software development, patches are used to update code, fix bugs, or add new features.\nStep 2: Install the Patch Tool #Most Linux distributions come with the patch utility pre-installed. If it\u0026rsquo;s not installed, you can install it using your distribution\u0026rsquo;s package manager. 
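A quick way to check whether patch is already present (an optional sanity check; the exact output will vary by distribution) is to ask it for its version:\npatch --version\nIf that prints a version banner you are set; otherwise install it with the package manager.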
For example, on Centos, you would use:\ndnf install patch Step 3: Create a Patch File #To create a patch file between an original file original.c and a modified file modified.c, use the diff command:\ndiff -u original.c modified.c \u0026gt; changes.patch This command creates a file named changes.patch containing the differences.\nStep 4: Apply the Patch #To apply the patch to another copy of the original file:\npatch original.c changes.patch Example: Patching a Simple Program #Original Code (original.c) ##include \u0026lt;stdio.h\u0026gt; int main() { printf(\u0026#34;Hello, world!\\n\u0026#34;); return 0; } Modified Code (modified.c) ##include \u0026lt;stdio.h\u0026gt; int main() { printf(\u0026#34;Hello, Linux World!\\n\u0026#34;); return 0; } Creating the Patch # Save the original and modified codes in original.c and modified.c respectively.\nRun:\ndiff -u original.c modified.c \u0026gt; mypatch.patch Applying the Patch # Have another copy of original.c ready.\nApply the patch:\npatch original.c mypatch.patch ","date":"3 December 2023","permalink":"/posts/linux-patch-management/","section":"Posts","summary":"This tutorial provides a step-by-step guide on how to create and apply patches in Linux, including an example of patching a simple piece of software.","title":"Linux Patch Management Tutorial"},{"content":"","date":null,"permalink":"/tags/patch/","section":"Tags","summary":"","title":"Patch"},{"content":"","date":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"Automate Dynamic DNS Updates with Gandi API and Docker #Managing a web domain can be a hassle, especially if you have a dynamic IP address. A dynamic IP address can change often, which makes it difficult to keep your DNS A record up-to-date. Fortunately, Gandi API provides a simple solution for updating DNS records programmatically.\nIn this tutorial, we\u0026rsquo;ll show you how to use the Gandi API, Docker, and shell scripting to automate the process of updating your DNS A record to reflect your current external IP address.\nFollow-up to Dynamic DNS Using Gandi This tutorial is a follow-up to the Dynamic DNS Using Gandi tutorial, which explains how to update DNS records using the Gandi API. The follow-up tutorial builds on the previous tutorial by demonstrating how to create a Docker container that runs the script as a service. By using Docker, you can package the script and its dependencies into a single container, making it easy to deploy and run on any platform. This approach ensures that the script is always running and updating your DNS records, even in the event of container restarts or system failures. In summary, this tutorial builds on the previous tutorial by demonstrating how to create a Docker container that runs the update_dns.sh script as a service, ensuring that your DNS records are always up-to-date.\nPrerequisites #Before we start, you will need the following:\nA Gandi account with an API key A domain name and a subdomain that you want to update Docker installed on your computer Setting up the environment variables #First, create a .env file with the following environment variables:\nGANDI_API_KEY=\u0026lt;api_key\u0026gt; DOMAIN=example.com SUBDOMAIN=subdomain TTL=300 IPLOOKUP=http://whatismyip.akamai.com/ Replace api_key with your Gandi API key, example.com with your domain name, subdomain with your subdomain, and 300 with your desired TTL value. The IPLOOKUP variable is the URL to check your public IP address. 
The default value is http://whatismyip.akamai.com/\nCreating the scripts #Now, let\u0026rsquo;s create the scripts that will update the DNS records automatically.\nCreate a start.sh file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 #!/bin/sh # Set the log file path LOG_FILE=\u0026#34;/var/log/update_dns.log\u0026#34; # Log when the container starts echo \u0026#34;$(date): Starting container\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; # Run the update_dns script once /bin/sh /usr/local/bin/update_dns.sh # Start the cron daemon crond -L /var/log/cron.log # Tail the logs to keep the container running tail -f /var/log/update_dns.log /var/log/cron.log \u0026amp; # Log when the container stops trap \u0026#34;echo $(date): Stopping container \u0026gt;\u0026gt; $LOG_FILE\u0026#34; EXIT # Wait for the container to stop wait This start.sh script sets up the log file path, logs when the container starts, runs the update_dns.sh script once, starts the crond daemon, tails the logs to keep the container running, logs when the container stops, and waits for the container to stop.\nFinally, create an update_dns.sh file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 #!/bin/bash # Set your Gandi API key, domain name, and subdomain GANDI_API_KEY=\u0026#34;$GANDI_API_KEY\u0026#34; DOMAIN=\u0026#34;$DOMAIN\u0026#34; SUBDOMAIN=\u0026#34;$SUBDOMAIN\u0026#34; # Set the TTL value for the DNS A record in seconds (default is 1800 seconds / 30 minutes) TTL=\u0026#34;$TTL\u0026#34; IPLOOKUP=\u0026#34;$IPLOOKUP\u0026#34; # Set the log file path LOG_FILE=\u0026#34;/var/log/update_dns.log\u0026#34; # Get the current external IP address CURRENT_IP=$(curl -s $IPLOOKUP) # Get the IP address and TTL of the DNS A record via the Gandi API DNS_INFO=$(curl -s -H \u0026#34;Authorization: Apikey $GANDI_API_KEY\u0026#34; \\ \u0026#34;https://dns.api.gandi.net/api/v5/domains/$DOMAIN/records/$SUBDOMAIN/A\u0026#34;) # Check if the DNS record exists if [ -z \u0026#34;$DNS_INFO\u0026#34; ]; then # Log an error if the DNS record doesn\u0026#39;t exist echo \u0026#34;$(date): Error: DNS record doesn\u0026#39;t exist\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; exit 1 fi # Extract the DNS IP address and TTL value from the API response DNS_IP=$(echo \u0026#34;$DNS_INFO\u0026#34; | jq -r \u0026#39;.rrset_values[0]\u0026#39;) DNS_TTL=$(echo \u0026#34;$DNS_INFO\u0026#34; | jq -r \u0026#39;.rrset_ttl\u0026#39;) # Check if the DNS IP is empty if [ -z \u0026#34;$DNS_IP\u0026#34; ]; then # Log an error if the DNS IP is empty echo \u0026#34;$(date): Error: DNS IP is empty\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; exit 1 fi # Compare the IP addresses if [ \u0026#34;$CURRENT_IP\u0026#34; != \u0026#34;$DNS_IP\u0026#34; ]; then # Log when there is an IP change echo \u0026#34;$(date): IP address changed from $DNS_IP to $CURRENT_IP\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; # Update the DNS A record via the Gandi API RESPONSE=$(curl -s -o /dev/null -w \u0026#34;%{http_code}\u0026#34; \\ -X PUT -H \u0026#34;Content-Type: application/json\u0026#34; -H \u0026#34;Authorization: Apikey $GANDI_API_KEY\u0026#34; \\ -d \u0026#39;{\u0026#34;rrset_values\u0026#34;: [\u0026#34;\u0026#39;$CURRENT_IP\u0026#39;\u0026#34;], \u0026#34;rrset_ttl\u0026#34;: \u0026#39;$TTL\u0026#39;}\u0026#39; \\ 
\u0026#34;https://dns.api.gandi.net/api/v5/domains/$DOMAIN/records/$SUBDOMAIN/A\u0026#34;) if [ \u0026#34;$RESPONSE\u0026#34; == \u0026#34;200\u0026#34; ] || [ \u0026#34;$RESPONSE\u0026#34; == \u0026#34;201\u0026#34; ]; then # Log when the DNS record is updated echo \u0026#34;$(date): DNS A record updated to $CURRENT_IP with TTL $TTL seconds\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; else # Log an error if the API request fails echo \u0026#34;$(date): API request failed with status code $RESPONSE\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; fi else # Log when the script is run without any IP change echo \u0026#34;$(date): IP address unchanged at $CURRENT_IP with TTL $DNS_TTL seconds\u0026#34; \u0026gt;\u0026gt; \u0026#34;$LOG_FILE\u0026#34; fi This update_dns.sh script sets up the required variables, gets the current external IP address, gets the IP address and TTL of the DNS A record via the Gandi API, checks if the DNS record exists and the DNS IP, compares the IP addresses and updates the DNS A record via the Gandi API if there is an IP change.\nBuilding the Docker container #Now, let\u0026rsquo;s create a Docker container to run our script. Create a Dockerfile with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 FROM alpine:3.15 RUN apk add --no-cache curl jq COPY update_dns.sh /usr/local/bin/ RUN chmod +x /usr/local/bin/update_dns.sh COPY start.sh /usr/local/bin/ RUN chmod +x /usr/local/bin/start.sh ENTRYPOINT [\u0026#34;/usr/local/bin/start.sh\u0026#34;] CMD [\u0026#34;crond\u0026#34;, \u0026#34;-f\u0026#34;] This Dockerfile uses the alpine:3.15 image, installs curl and jq, copies the update_dns.sh and start.sh scripts to the container, and sets start.sh as the entry point.\nNext, create a docker-compose.yml file with the following content:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 version: \u0026#34;3.9\u0026#34; services: update-dns: build: context: . dockerfile: Dockerfile volumes: - ./crontab.txt:/etc/crontabs/root - ./logs:/var/log - \u0026#34;/etc/timezone:/etc/timezone:ro\u0026#34; - \u0026#34;/etc/localtime:/etc/localtime:ro\u0026#34; env_file: - .env command: [\u0026#34;crond\u0026#34;, \u0026#34;-f\u0026#34;] This docker-compose.yml file defines a service named update-dns that builds the Docker image using the Dockerfile and sets up the required volumes, environment variables, and command to run.\nUsing crontab.txt to schedule tasks #In addition to the scripts and Dockerfile, the docker-compose.yml file in the repository references a file named crontab.txt as a volume. This file is used to schedule tasks using the cron utility.\nThe crontab.txt file in the repository contains the following line:\n*/30 * * * * /bin/sh /usr/local/bin/update_dns.sh This line specifies that the update_dns.sh script should be run every 30 minutes.\nWhen the Docker container is started, the crontab.txt file is mounted as a volume in the container\u0026rsquo;s /etc/crontabs/root directory. The cron daemon reads this file and runs the scheduled tasks at the specified intervals.\nIn summary, the crontab.txt file is used to schedule the execution of the update_dns.sh script every 30 minutes, ensuring that the DNS records are updated regularly.\nRunning the Docker container #To run the Docker container, use the following command:\ndocker-compose up -d This command builds the Docker image, creates a container, and starts the container in detached mode. 
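Once it is up, a quick way to confirm the service started (run from the directory containing docker-compose.yml) is:\ndocker-compose ps\nThe update-dns service should show as Up (or running, depending on your Compose version); if it does not, docker-compose logs update-dns will usually show why.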
The -d flag indicates that the container should run in the background.\nYou can also build the container separately, if you want to, by running\ndocker build -t gandi-dyndns . You can check the logs in the /logs/ directory. There are two logs that will be output. They are cron.log and update_dns.log.\nupdate_dns.log contains all the log output from the script and will look something like this:\nWed Mar 22 16:26:10 UTC 2023: Starting container Wed Mar 22 16:27:01 UTC 2023: IP address changed from \u0026lt;old_ip\u0026gt; to \u0026lt;new_ip\u0026gt; Wed Mar 22 16:27:01 UTC 2023: DNS A record updated to \u0026lt;new_ip\u0026gt; with TTL 300 seconds Wed Mar 22 16:28:01 UTC 2023: IP address unchanged at \u0026lt;old_ip\u0026gt; with TTL 300 seconds Wed Mar 22 16:29:00 UTC 2023: IP address unchanged at \u0026lt;old_ip\u0026gt; with TTL 300 seconds Source code #You can find the complete source code for this tutorial on the GitHub repository virtuallytd/gandi-dyndns. The repository contains the Dockerfile, docker-compose.yml, update_dns.sh, start.sh, .env and crontab.txt files used in this tutorial.\nFeel free to fork the repository and modify the code to suit your needs.\nConclusion #In this tutorial, we have learned how to update DNS records automatically using Docker and the Gandi API. We have created a Docker container with the required scripts and environment variables, built the Docker image, and run the container in detached mode. We have also checked the logs to make sure that the scripts are running correctly.\nWith this setup, you can rest assured that your DNS records will be updated automatically, keeping your website online 24/7.\nOriginal Article Dynamic DNS Using Gandi\n","date":"26 March 2023","permalink":"/posts/automate-dynamic-dns-updates-with-gandi-api-and-docker/","section":"Posts","summary":"The article guides you through automating DNS updates with dynamic IP changes using Gandi API, Docker, and shell scripting. It extends a prior tutorial by encapsulating the script within a Docker container for easy deployment across platforms, ensuring uninterrupted DNS record updates.","title":"Automate Dynamic DNS Updates with Gandi API and Docker"},{"content":"","date":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation"},{"content":"","date":null,"permalink":"/tags/dns/","section":"Tags","summary":"","title":"Dns"},{"content":"","date":null,"permalink":"/tags/dyndns/","section":"Tags","summary":"","title":"Dyndns"},{"content":"","date":null,"permalink":"/tags/selfhosting/","section":"Tags","summary":"","title":"Selfhosting"},{"content":"A collection of links and articles I\u0026rsquo;ve found interesting and want to share.\nGeneral # GPT-3 Is the Best Journal I’ve Ever Used\nFor the past few weeks, I’ve been using GPT-3 to help me with personal development. I wanted to see if it could help me understand issues in my life better, pull out patterns in my thinking, help me bring more gratitude into my life, and clarify my values.\nUses This\nUses This is a collection of nerdy interviews asking people from all walks of life what they use to get the job done.\nYou May Be Early, but You\u0026rsquo;re Not Wrong: A Covid Reading List\nYesterday, I came across a somber tweet by a man who’s trying to protect his family from Covid. 
He said, “my wife has been speaking with the principal of my children’s elementary school and that he has been advising her to file for divorce because I was clearly not well and ‘my life revolves around fear.’”\nProjects # Scraping Information From LinkedIn Into CSV using Python\nIn this post, we are going to scrape data from Linkedin using Python and a Web Scraping Tool. We are going to extract Company Name, Website, Industry, Company Size, Number of employees, Headquarters Address, and Specialties.\nAnna’s Archive\nAnna’s Archive is a project that aims to catalog all the books in existence, by aggregating data from various sources. We also track humanity’s progress toward making all these books easily available in digital form, through “shadow libraries”.\nHow to use Raycast and how it compares to Spotlight and Alfred\nMost Mac users find Spotlight, Apple’s built-in tool for searching through apps and files, to suit their needs just fine. But power users who want to have near total control over their computer (as well as access to shortcuts and tools) have often looked for other alternatives. Lately, an app called Raycast has been gaining attention as one of those options, competing with one of the community’s long-standing favorites, Alfred.\nGuides # Build a Tiny Certificate Authority For Your Homelab\nIn this tutorial, we’re going to build a tiny, standalone, online Certificate Authority (CA) that will mint TLS certificates and is secured with a YubiKey. It will be an internal ACME server on our local network (ACME is the same protocol used by Let’s Encrypt). The YubiKey will securely store the CA private keys and sign certificates, acting as a cheap alternative to a Hardware Security Module (HSM). We’ll also use an open-source True Random Number Generator, called Infinite Noise TRNG, to spice up the Linux entropy pool.\nSSH - run script or command at login\nThere a multiple use cases to run a script on login. Configuration, starting services, logging, sending a notification, and so on. I want to show you different ways to do so.\nDryer Notifications with Home Assistant GUI only\nDryer notifications using Tasmota with Home Assistant autodiscovery and automation triggers plus an energy cost calculation sensor and dryer state sensor.\nManagement # Awesome CTO\nA curated and opinionated list of resources for Chief Technology Officers and VP R\u0026amp;D, with the emphasis on startups and hyper-growth companies.\nTact Filter\nI came up with this idea several years ago in a conversation with a friend at MIT, who was regularly finding herself upset by other people who worked in her lab. The analogy worked so well in helping her to understand her co-workers that I decided to write it up and put it on the web. I\u0026rsquo;ve gotten quite a few email messages since then from other people who have also found it helpful.\nAn Exact Breakdown of How One CEO Spent His First Two Years of Company-Building\nPeople often wonder how startup CEOs spend their time. Well, I’m a bit obsessive, and I track every 15-minute increment of how I spend my time and I’ve been doing so religiously for years. A little background — as a four-time founder, I\u0026rsquo;ve historically been on the technical side of the companies, either as an individual contributor or leading engineering teams. My role as CEO of Levels is my first non-technical role.\nBooks # Essentialism: The Disciplined Pursuit of Less\nThe Way of the Essentialist involves doing less, but better, so you can make the highest possible contribution. 
Purchase from Amazon.de\nWhen They Win You Win\nWe don’t need another person’s opinion about what it means to be a great manager. We need to learn to lead in a way that measurably and predictably delivers more engaged employees and better business results. Purchase from Amazon.de\nSpare\nIt was one of the most searing images of the twentieth century: two young boys, two princes, walking behind their mother’s coffin as the world watched in sorrow—and horror. As Princess Diana was laid to rest, billions wondered what Prince William and Prince Harry must be thinking and feeling and how their lives would play out from that point on. Purchase from Amazon.de\n","date":"22 January 2023","permalink":"/links/link-collection-3/","section":"Links","summary":"VirtuallyTD\u0026rsquo;s Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, guiding readers to valuable resources and insights.","title":"Link Collection 3"},{"content":"","date":null,"permalink":"/tags/documentation/","section":"Tags","summary":"","title":"Documentation"},{"content":"Choosing Domain Names and IPs for Documentation #When writing documentation it is a good practice to not use public/valid domain names or IP addresses. The RFC documents listed below provide domains and ips that can be used for examples or documentation purposes.\nDomains #Domains reserved for documentation are described in\nRFC2606 - Reserved Top Level DNS Names RFC6761 - Special-Use Domain Names. Top level domain names reserved for documentation:\n.test // for testing .example // for examples .invalid // obviously for invalid domain names .localhost // only pointing to the loop back IP address Second level domain names reserved for documentation:\nexample.com example.net example.org IPv4 #IPv4 addresses reserved for documentation are described in\nRFC1918 - Address Allocation for Private Internets RFC6598 - IANA-Reserved IPv4 Prefix for Shared Address Space RFC6890 - Special-Purpose IP Address Registries RFC8190 - Updates to the Special-Purpose IP Address Registries and obsolete\nRFC3330 - Special-Use IPv4 Addresses RFC5735 - Special Use IPv4 Addresses IPv4 documentation only network block is 192.0.2.0/24\nAddress space:\n10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) IPv6 #IPv6 addresses reserved for documentation are described in\nRFC3849 - IPv6 Address Prefix Reserved for Documentation. IPv4 documentation only network block is 2001:DB8::/32\n","date":"5 September 2021","permalink":"/posts/domain-names-and-ips-for-documentation/","section":"Posts","summary":"The article talks about using reserved domain names and IP addresses in documentation. It shares a list of reserved domains, IPv4, and IPv6 addresses as per RFC documents, making documentation easier and standard-compliant​.​","title":"Domain Names and IPs for Documentation"},{"content":"","date":null,"permalink":"/tags/network/","section":"Tags","summary":"","title":"Network"},{"content":"","date":null,"permalink":"/tags/technical/","section":"Tags","summary":"","title":"Technical"},{"content":"Introduction #In today\u0026rsquo;s interconnected world, efficient network configuration is key. This guide focuses on a specific aspect of network configuration on macOS: setting up DNS routing for specific domains. 
Ideal for those who use VPNs and wish to maintain optimal network configuration, this guide will walk you through the process step-by-step.\nOverview: Custom DNS Configuration for Specific Domains on macOS #I have been looking into a solution for using specific DNS servers for certain internal subdomains. These DNS servers are only available via VPN.\nI don\u0026rsquo;t want all my queries to go through this internal DNS resolver, because my usual resolver blocks ads and trackers.\nThe Effective Solution: to specify the resolver to use for a specific domain, create a file named after the domain in /etc/resolver/ and add the nameservers.\nStep-by-Step Configuration Guide #Step 1: Verify the Existence of /etc/resolver/ Directory #It\u0026rsquo;s essential to first ensure that the required directory exists on your system. This directory will hold your custom DNS configurations. First make sure the /etc/resolver/ directory exists\nmacbook:~ user$ sudo mkdir /etc/resolver/ Step 2: Creating a Domain-Specific Configuration File #Once you have confirmed the existence of the directory, the next step involves creating a file that is specific to the domain you want to configure. Create the domain file\nmacbook:~ user$ sudo vi /etc/resolver/example.com Step 3: Adding Nameservers to Your Domain File #After creating the domain-specific file, the crucial part is to add the nameservers. This determines where your DNS queries for the domain are sent. Add the nameservers to the file you just created\nmacbook:~ user$ cat /etc/resolver/example.com nameserver 192.0.2.100 Now, all queries for example.com will be resolved by 192.0.2.100.\nThe caveat with this technique is that tools like dig query nameservers directly instead of using the system resolver the way ordinary apps do, so they will bypass this configuration.\nTesting Your DNS Configuration #After setting up your DNS configurations, it\u0026rsquo;s vital to test and ensure that they are working as expected.\nVerifying Configuration with \u0026lsquo;scutil \u0026ndash;dns\u0026rsquo; #A reliable way to test your configuration is by using the scutil --dns command.\nUsing \u0026lsquo;scutil \u0026ndash;dns\u0026rsquo; for Verification #Use the scutil --dns Command to Verify Configuration:\nmacbook:~ user$ scutil --dns resolver #8 domain : example.com nameserver[0] : 192.0.2.100 flags : Request A records, Request AAAA records reach : 0x00000002 (Reachable) Frequently Asked Questions #Q1: Why is custom DNS routing important on macOS?\nA: Custom DNS routing allows for more control over network traffic, particularly useful in professional settings or when using VPNs.\nQ2: Can this setup improve network security?\nA: Yes, by directing DNS queries through specific servers, you can enhance security and privacy.\nQ3: What if I encounter errors during configuration?\nA: Ensure you have admin rights and that you\u0026rsquo;re entering commands correctly. For specific issues, consult online forums or Apple support.\nConclusion\nCustom DNS routing on macOS can significantly improve your network performance, especially when dealing with internal domains over VPNs. 
This guide aims to simplify the process, making it accessible even to those with limited networking experience.\n","date":"26 March 2021","permalink":"/posts/macos-dns-routing-by-domain/","section":"Posts","summary":"Learn how to configure different nameservers for specific domains on macOS for optimized network performance","title":"How to Set Up DNS Routing by Domain on macOS"},{"content":"","date":null,"permalink":"/tags/mac/","section":"Tags","summary":"","title":"Mac"},{"content":"A collection of links and articles ive found interesting and want to share.\nGeneral # 20 Future Technologies That Will Change the World by 2050\nI recently shared an article called “The “Next Big Thing” in Technology : 20 Inventions That Will Change the World”, which got a few dozen thousand hits in the past couple of weeks. This calls for a sequel.\nWork Lessons from the Pandemic\nI’ve been thinking a lot about what changes in my work I’d like to keep, post-pandemic (can we even talk about a post-pandemic world? It still feels pretty far off). I’m trying to be deliberate and actionable about it.\nRemote Work: 5 Strategies for Creating Long Term Support\nMany people have been predicting that the pandemic will have a lasting impact on remote work. I came across an article the other day that stated prior to COVID-19, about 4% of the total U.S. workforce was working remotely.\nProjects # CCS811 Indoor Air Quality Sensor Driver in Rust\nWe spend an enormous amount of time indoors. The indoor air quality is often overlooked but it is actually an important factor in our health, comfort and even productivity. There are lots of things that contribute to the degradation of the indoor air quality over time.\nAdblockerGoogleSearch\nAn extension that removes ads from google search results and moves real results up!\nbunkerized-nginx\nDocker image secured by non-exhaustive list of features: HTTPS support with transparent Let\u0026rsquo;s Encrypt automation State-of-the-art web security, HTTP security headers, hardening etc.\nGuides # YubiKey for SSH, Login, 2FA, GPG and Git Signing\nI\u0026rsquo;ve been using a YubiKey Neo for a bit over two years now, but its usage was limited to 2FA and U2F. Last week, I received my new DELL XPS 15 9560, and since I am maintaining some high impact open source projects, I wanted the setup to be well secured.\nTraefik: canary deployments with weighted load balancing\nTraefik is the Cloud Native Edge Router yet another reverse proxy and load balancer. Omitting all the cloud-native buzzwords, what really makes Traefik different from Nginx, HAProxy, and alike is the automatic and dynamic configurability it provides out of the box.\nBuilding Serverless Microservices – Picking the right design\nIn the last article, we built a serverless microservice. But one microservice on its own doesn’t do much “by design”. In this post, we will start accelerating our microservice game, by adding a “simple” requirement that will shape our system.\nManagement # Optimize Onboarding\nIt takes roughly 2 weeks to form a habit; it takes roughly two weeks to get comfortable in a new environment. 
A common mistake is to treat a new report’s first couple weeks like college orientation - social, light hearted, get-to-know-you stuff.\nLearn The \u0026ldquo;Disagree and Commit\u0026rdquo; Exercise for Better Leadership\nWhat can make us incredibly valuable at work - our willingness to disagree openly and commit to helping others succeed or sticking to our arguments even when others have moved forward and a decision has been made.\nHow to (Actually) Change Someone’s Mind\nIf you’re a leader, it’s likely that not everyone who works with you will agree with the decisions you make — and that’s okay. Leadership involves making unpopular decisions while navigating complex relationships with colleagues, partners, and clients.\n","date":"10 February 2021","permalink":"/links/link-collection-2/","section":"Links","summary":"VirtuallyTD\u0026rsquo;s Link Collection series offers a curated selection of informative links on diverse tech topics. Each post presents a unique compilation, guiding readers to valuable resources and insights.","title":"Link Collection 2"},{"content":"A collection of links and articles ive found interesting and want to share.\nGeneral # Why senior engineers get nothing done\nYou start with writing code and delivering fantastic results. You\u0026rsquo;re killing it, and everybody loves you! Rock on. Then your code hits production.\nEntropy Explained, With Sheep\nLet\u0026rsquo;s start with a puzzle. Why does this gif look totally normal?\nTech Trends for 2021 and Beyond\nHow much is being invested in Europe and worldwide in tech trends such as Blockchain, Artificial Intelligence, IoT and 3D Printing, both now and in the coming years, and which countries are ahead of the rest of Europe?\nProjects # Your next meeting always before your eyes\nMeetingBar works on macOS with your calendar. Join and create meetings in one click.\nHow to Use tmux on Linux (and Why It\u0026rsquo;s Better Than Screen)\nThe tmux command is a terminal multiplexer, like screen. Its advocates are many and vocal, so we decided to compare the two.\narthepsy/ssh-audit\nSSH-audit is a tool for ssh server auditing.\nGuides # Making a smart meter out of a dumb one for $4\nAs a geek who has a few servers and other devices at home, I can\u0026rsquo;t stop thinking about my power output. I always wanted live stats on my power consumption.\nAutomating your GitHub routine\nLike many developers in the realm of Software Engineering, we are using git as our version control system.\nBuilding a self-updating profile README for GitHub\nGitHub quietly released a new feature at some point in the past few days: profile READMEs.\nUsing Ansible to automate my Macbook setup\nI am soon going to get a new Macbook, and have been thinking about how to set it up quickly and easily.\nManagement # Thoughts on giving feedback\nA good, blameless feedback culture is essential for working together efficiently as it forms healthy relationships, fuels personal and professional growth and aligns us with common norms.\nExpiring vs. Permanent Skills\nRobert Walter Weir was one of the most popular instructors at West Point in the mid-1800s, which is odd at a military academy because he taught painting and drawing.\n","date":"3 January 2021","permalink":"/links/link-collection-1/","section":"Links","summary":"VirtuallyTD\u0026rsquo;s Link Collection series offers a curated selection of informative links on diverse tech topics. 
Each post presents a unique compilation, guiding readers to valuable resources and insights.","title":"Link Collection 1"},{"content":"I\u0026rsquo;ve wanted to decrease my reliance on Google products recently and have decided a quick way for me to do this is to host my own CardDav and CalDav server using Radicale\nCalDav can be used to host your own calendar server and CardDav is for your own contacts server.\nRadicale Configuration #Install Python #The Radicale application is written in python and as such the python package and pip are needed to set it up.\n[root@server ~]# yum -y install python36 Install Radicale #[root@server ~]# python3 -m pip install --upgrade radicale Create Radicale User and Group #[root@server ~]# useradd --system --user-group --home-dir /var/lib/radicale --shell /sbin/nologin radicale Create Radicale Storage #[root@server ~]# mkdir -p /var/lib/radicale/collections [root@server ~]# chown -R radicale:radicale /var/lib/radicale/collections [root@server ~]# chmod -R o= /var/lib/radicale/collections Create Radicale Config #Create the configuration file [root@server ~]# vi /etc/radicale/config\nAdd the following to the configuration file\n[server] hosts = 127.0.0.1:5232 max_connections = 20 # 100 Megabyte max_content_length = 100000000 # 30 seconds timeout = 30 ssl = False [encoding] request = utf-8 stock = utf-8 [auth] type = htpasswd htpasswd_filename = /var/lib/radicale/users htpasswd_encryption = md5 [storage] filesystem_folder = /var/lib/radicale/collections Add Radicale Users #Create a new htpasswd file with the user \u0026ldquo;user1\u0026rdquo; [root@server ~]# printf \u0026#34;user1:`openssl passwd -apr1`\\n\u0026#34; \u0026gt;\u0026gt; /var/lib/radicale/users Password: Verifying - Password:\nAdd another user [root@server ~]# printf \u0026#34;user2:`openssl passwd -apr1`\\n\u0026#34; \u0026gt;\u0026gt; /var/lib/radicale/users Password: Verifying - Password:\nCreate Radicale Systemd Script #Create the systemd script [root@server ~]# vi /etc/systemd/system/radicale.service\nAdd the following to the systemd service file 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 [Unit] Description=A simple CalDAV (calendar) and CardDAV (contact) server After=network.target Requires=network.target [Service] ExecStart=/usr/bin/env python3 -m radicale Restart=on-failure User=radicale # Deny other users access to the calendar data UMask=0027 # Optional security settings PrivateTmp=true ProtectSystem=strict ProtectHome=true PrivateDevices=true ProtectKernelTunables=true ProtectKernelModules=true ProtectControlGroups=true NoNewPrivileges=true ReadWritePaths=/var/lib/radicale/collections [Install] WantedBy=multi-user.target Systemd Radicale Service #Reload Systemd #[root@server ~]# systemctl daemon-reload Start Radicale Service #[root@server ~]# systemctl start radicale Radicale Service Autostart #[root@server ~]# systemctl enable radicale Check the status of the service #[root@server ~]# systemctl status radicale View all log messages #[root@server ~]# journalctl --unit radicale.service By here you should be able to connect locally to http://127.0.0.1:5232. 
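A minimal check that the service is answering (assuming curl is installed) is to request the address it listens on and look at the status line of the response:\n[root@server ~]# curl -i http://127.0.0.1:5232/\nA 200 or 401 response means Radicale is up and reachable; a connection refused error means the service is not listening yet.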
Next we will configure Nginx to sit in front of the Radicale service and proxy all requests.\nInstall Nginx #[root@server ~]# yum -y install nginx Nginx Configuration: #Add the following configuration to the server block in nginx.conf (Or this can be added to a virtual host)\nlocation /radicale/ { proxy_pass http://127.0.0.1:5232/; proxy_set_header X-Script-Name /radicale; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass_header Authorization; } Check Nginx Configuration #[root@server ~]# nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Restart Nginx #[root@server ~]# systemctl restart nginx Open Firewall Ports #Open the firewall ports as needed (80: http or 443: https) [root@server ~]# firewall-cmd --add-port=80/tcp --permanent\n[root@server ~]# firewall-cmd --add-port=443/tcp --permanent [root@server ~]# firewall-cmd --reload Once you restart Nginx you should be able to access Radicale on a normal HTTP or HTTPS port by browsing to http://example.com/radicale/ and you should see the login screen.\nScreenshot of Radicale Login Login To Radicale #Use the username and password you created in the steps above to log in to the Radicale portal.\nScreenshot of Radicale Collection Create A Collection (Cal) #Click \u0026ldquo;Create new addressbook or calendar\u0026rdquo;\nScreenshot of Radicale Cal Creation Fill it in with whatever details you want, then click create.\nScreenshot of Radicale Cal Created You should now be able to add that URL to a CalDAV-enabled client, authenticate, and then see and sync your calendar.\nFor further configuration options take a look at the Radicale Page\n","date":"23 September 2020","permalink":"/posts/radicale-carddav-and-caldav-server/","section":"Posts","summary":"To decrease reliance on Google services, this howto describes how to set up your own contact (CardDAV) and calendar (CalDAV) server","title":"Radicale CardDAV And CalDAV Server"},{"content":"Updating Gandi DNS Using the API and a Shell Script #In this tutorial, we will show you how to update your Gandi DNS records using the Gandi API and a shell script. This approach is useful for those who have dynamic IP addresses and need to keep their DNS records up-to-date.\nBy using the Gandi API and a shell script, you can automate the process of updating your DNS records, ensuring that your website or application is always available at the correct IP address.\nIn this tutorial, we will walk you through the process of creating a shell script to update your DNS records, and scheduling the script to run automatically.\nRead on to learn how to update your Gandi DNS records using the API and a shell script.\nFurther Reading If you\u0026rsquo;re interested in automating dynamic DNS updates with the Gandi API and Docker, you might find my follow-up article Automate Dynamic DNS Updates with Gandi API and Docker helpful. This article goes into more detail on how to set up the Docker container, including how to build the Docker image, run the container, and schedule tasks using cron. It also covers best practices for running Docker containers in production environments. 
Check out the article for more information and step-by-step instructions.\nDynamic DNS #Dynamic DNS is a way to associate a changing Dynamic IP address (usually residential xDSL connections) to a static domain name (DNS record)\nThis allows you to connect to home.example.com -\u0026gt; DNS Lookup \u0026amp; Resolution -\u0026gt; 203.0.113.78. The DNS entry for home.example.com is updated automatically at set intervals or when an IP address change is detected.\nDynamic DNS Providers #There are a number of Dynamic DNS providers that can be used, a well known provider is https://dyn.com/, but unfortunately some of these services come with a cost.\nGandi Live DNS #https://www.gandi.net/en provides a Live DNS Service\nLiveDNS is Gandi\u0026rsquo;s upcoming DNS platform, a completely new service that offers its own API and its own nameservers.\nThe new platform offers powerful features to manage DNS Zone templates that you can integrate into your own workflow. Features include bulk record management, association with multiple domains, versioning and rollback.\nImplementation #The below instructions will show you how to create a Dynamic DNS system using a single script and Gandi\u0026rsquo;s LiveDNS.\nPrerequisites #Make sure you have the following applications installed:\ncurl - https://curl.haxx.se/ jq - https://stedolan.github.io/jq/ Gandi LiveDNS API Key - Retrieve your API Key from the \u0026ldquo;Security\u0026rdquo; section in the Account Admin Panel Bash Script #Create a bash script and put it under \u0026ldquo;/usr/local/bin/dyndns_update.sh\u0026rdquo;. (This can of course be kept wherever you want)\nAdd the API key you got from the Gandi Account Panel.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 #!/bin/bash # This script gets the external IP of your systems then connects to the Gandi # LiveDNS API and updates your dns record with the IP. # Gandi LiveDNS API KEY API_KEY=\u0026#34;............\u0026#34; # Domain hosted with Gandi DOMAIN=\u0026#34;example.com\u0026#34; # Subdomain to update DNS SUBDOMAIN=\u0026#34;dynamic\u0026#34; # Get external IP address EXT_IP=$(curl -s ifconfig.me) #Get the current Zone for the provided domain CURRENT_ZONE_HREF=$(curl -s -H \u0026#34;X-Api-Key: $API_KEY\u0026#34; https://dns.api.gandi.net/api/v5/domains/$DOMAIN | jq -r \u0026#39;.zone_records_href\u0026#39;) # Update the A Record of the subdomain using PUT curl -D- -X PUT -H \u0026#34;Content-Type: application/json\u0026#34; \\ -H \u0026#34;X-Api-Key: $API_KEY\u0026#34; \\ -d \u0026#34;{\\\u0026#34;rrset_name\\\u0026#34;: \\\u0026#34;$SUBDOMAIN\\\u0026#34;, \\\u0026#34;rrset_type\\\u0026#34;: \\\u0026#34;A\\\u0026#34;, \\\u0026#34;rrset_ttl\\\u0026#34;: 1200, \\\u0026#34;rrset_values\\\u0026#34;: [\\\u0026#34;$EXT_IP\\\u0026#34;]}\u0026#34; \\ $CURRENT_ZONE_HREF/$SUBDOMAIN/A Run The Script #I would set this script to run via crontab every 30 minutes. 
This ensures with an IP change the Dynamic DNS would only be out of date for a maximum of 30 minutes.\nEdit crontab with the following command\n[root@server ~]# crontab -e Add the following lines to run the script every 30 minutes.\n*/30 * * * * /bin/bash /usr/local/bin/dyndns_update.sh Once the script runs it should update the dynamic.example.com dns entry with the external IP that was found by the script.\n","date":"12 November 2019","permalink":"/posts/dynamic-dns-using-gandi/","section":"Posts","summary":"This is a technical article about how to setup Dynamic DNS using Gandi.net Live DNS system.​","title":"Dynamic DNS Using Gandi"},{"content":"Background #Are you trying to extract the contents of an RPM file on your Mac? I found myself in a similar situation, wanting to view the standard contents of a configuration file stored inside an RPM. Here\u0026rsquo;s a guide on how to open and extract an RPM file on MacOS.\nProcedure #First, download and install Homebrew on MacOSX.\nMacBook:~ user$ /bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; After successfully installing Homebrew, it\u0026rsquo;s time to install the rpm2cpio utility. This tool will be crucial to our task of extracting the RPM on MacOS.\nMacBook:~ user$ brew install rpm2cpio ==\u0026gt; Downloading https://formulae.brew.sh/api/formula.jws.json ######################################################################### 100.0% ==\u0026gt; Downloading https://formulae.brew.sh/api/cask.jws.json ######################################################################### 100.0% ==\u0026gt; Fetching rpm2cpio ==\u0026gt; Downloading https://ghcr.io/v2/homebrew/core/rpm2cpio/manifests/1.4-1 ######################################################################### 100.0% ==\u0026gt; Downloading https://ghcr.io/v2/homebrew/core/rpm2cpio/blobs/sha256:a0d766ccb ==\u0026gt; Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sh ######################################################################### 100.0% ==\u0026gt; Pouring rpm2cpio--1.4.arm64_ventura.bottle.1.tar.gz 🍺 /opt/homebrew/Cellar/rpm2cpio/1.4: 3 files, 3.2KB ==\u0026gt; Running `brew cleanup rpm2cpio`... With the rpm2cpio utility installed, you can now extract the RPM package in your MacOS. 
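If you first want a preview of what is inside the package without writing anything to disk, you can list the archive contents (shown here against the same chrony package used in the extraction step below):\nMacBook:~ user$ rpm2cpio chrony-4.3-1.el9.x86_64.rpm | cpio -it\nThe -t flag tells cpio to print a table of contents instead of extracting the files.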
Run the following command to extract the contents of the RPM file.\nMacBook:~ user$ rpm2cpio chrony-4.3-1.el9.x86_64.rpm | cpio -idmv ./etc/chrony.conf ./etc/chrony.keys ./etc/dhcp/dhclient.d/chrony.sh ./etc/logrotate.d/chrony ./etc/sysconfig/chronyd ./usr/bin/chronyc ./usr/lib/.build-id ./usr/lib/.build-id/27 ./usr/lib/.build-id/27/22526e8b01c2e304dae76c95b96d08368d541b ./usr/lib/.build-id/bc ./usr/lib/.build-id/bc/b4a77a141da491a2df6664d74de0193e276d7c ./usr/lib/NetworkManager ./usr/lib/NetworkManager/dispatcher.d ./usr/lib/NetworkManager/dispatcher.d/20-chrony-dhcp ./usr/lib/NetworkManager/dispatcher.d/20-chrony-onoffline ./usr/lib/systemd/ntp-units.d/50-chronyd.list ./usr/lib/systemd/system/chrony-wait.service ./usr/lib/systemd/system/chronyd.service ./usr/lib/sysusers.d/chrony.conf ./usr/sbin/chronyd ./usr/share/doc/chrony ./usr/share/doc/chrony/FAQ ./usr/share/doc/chrony/NEWS ./usr/share/doc/chrony/README ./usr/share/licenses/chrony ./usr/share/licenses/chrony/COPYING ./usr/share/man/man1/chronyc.1.gz ./usr/share/man/man5/chrony.conf.5.gz ./usr/share/man/man8/chronyd.8.gz ./var/lib/chrony ./var/log/chrony 1253 blocks And there you have it, a simple and effective method to open and extract an RPM file on MacOS. Now you can navigate and explore the contents of your RPM file as needed.\nUpdates #2023-05-22: This article has been recently updated to reflect the latest commands for the Homebrew package manager and to illustrate the extraction process using a current RPM file.\n","date":"29 October 2019","permalink":"/posts/extract-an-rpm-package-on-macos/","section":"Posts","summary":"Discover how to open and extract RPM files on MacOS with our step-by-step guide. Whether you\u0026rsquo;re looking to view the contents of a configuration file or explore an RPM package, our article provides all the necessary instructions.​","title":"How to open and extract RPM file on MacOS"},{"content":"","date":null,"permalink":"/tags/rpm/","section":"Tags","summary":"","title":"Rpm"},{"content":"","date":null,"permalink":"/tags/article/","section":"Tags","summary":"","title":"Article"},{"content":"Overview #If you manage a team, or are looking at hiring, you need to gauge people. Technical abilities are important, but they are not the most critical criteria. Most importantly is attitude. A person\u0026rsquo;s attitude shapes their behaviour toward people around them. In almost all environments and organisation, no one works alone. For developers, they could be working with other developers, with a DevOps Engineer or a project/product manager, etc.\nThe ability to collaborate well with people is extremely important and attitude drives that. A good employee or teammate requires less of their technical skillset, and more of their attitude when working with others. When evaluating people in a team, or in general in any organisation, we can categorise people in four types: Adders, Subtractors, Multipliers, and Dividers.\nThe Different Types #Adders #These are the type of people you want in your team. They always deliver with tremendous results. They are never a burden for the team as they hold their weight with excellent performance. They know what tasks need to be done and how to achieve it. They are capable of the work and they bring more benefits to the organisation than their costs.\nSubtractors #These are the type of people you want to avoid having. You can typically spot this type of person during the interview process. However, sometimes you\u0026rsquo;re stuck with them by inheritance. 
Subtractors are usually good people, well-liked employees. But their performance is not up to standard.\nDon\u0026rsquo;t mislabel subtractors alongside junior employees. Junior employees are new to the job and may not have all the skills required, so it\u0026rsquo;s understandable if their performance is coming up short. Subtractors are not new to the job yet they aren\u0026rsquo;t producing more than they cost. Sometimes subtractors have the required skillset but their results are sloppy or constantly require assistance from other team members.\nThe good news is we can coach and turn subtractors into adders, given that they have the ability to learn and the right attitude toward learning. Once turned into adders, they will become very loyal employees who grow with the organisation.\nMultipliers #Multipliers are adders who not only perform well individually but can also motivate and help others. They are productive and tend to be very proactive and have leadership skills. They know how to collaborate, how to manage up and down. They know how to communicate with the internal team as well as external partners. Most importantly, they motivate, encourage, and lift the team spirit through their work energy.\nDividers #Dividers are subtractors who not only cannot perform, but also damage your team environment. They don\u0026rsquo;t have accountability for their work. They always come up with excuses instead of realising how they may have underperformed. They backtalk and form side conversations to bad-mouth someone or a decision. They are toxic to your team environment. The longer you have them, the more damage they will do to your team culture and morale.\n","date":"11 August 2019","permalink":"/posts/asmd-the-types-of-people/","section":"Posts","summary":"This article looks at the different types of attitudes people have, using a simple categorisation of Adders, Subtractors, Multipliers and Dividers.","title":"The ASMD Types of People"},{"content":"","date":null,"permalink":"/tags/cache/","section":"Tags","summary":"","title":"Cache"},{"content":"","date":null,"permalink":"/tags/centos/","section":"Tags","summary":"","title":"Centos"},{"content":"","date":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security"},{"content":"Tripwire is an Intrusion Detection System. It is used to secure systems and creates a unique fingerprint of how a system is configured. It continually checks the system against this fingerprint and if there are any inconsistencies between the fingerprint and the current system it is logged and a report generated. This is a sure-fire way to tell if a system has been changed without your knowledge. 
This post will guide you through installation and configuration of Tripwire IDS running on a CentOS 7 system.\nInstall Tripwire #Install tripwire IDS from the yum repositories.\nAdd EPEL Repository #First enable the EPEL Repository.\n[root@server ~]# yum -y install epel-release Install the Tripwire Application #Install the Tripwire binaries.\n[root@server ~]# yum -y install tripwire Backup Original Configuration #Backup the original Tripwire configuration files before making any changes.\n[root@server ~]# mkdir ~/tripwire_backup [root@server ~]# cp /etc/tripwire/twcfg.txt ~/tripwire_backup/twcfg.txt [root@server ~]# cp /etc/tripwire/twpol.txt ~/tripwire_backup/twpol.txt Directory Checking #Change \u0026lsquo;LOOSEDIRECTORYCHECKING\u0026rsquo; to true.\n[root@server ~]# sed -i \u0026#39;/^LOOSEDIRECTORYCHECKING/ s/false/true/g\u0026#39; /etc/tripwire/twcfg.txt Create Keys #Create the keys to secure Tripwire.\n[root@server ~]# /usr/sbin/tripwire-setup-keyfiles Initialise DB #Initialise the Tripwire database. (A list of errors will be displayed these will be fixed later on, so are safe to ignore)\n[root@server ~]# tripwire --init A message should be displayed that the database was successfully generated.\nFix Errors #Tripwire checks a number of different settings on the system, it will check for a configuration that may not actually be included on your system and produce an error. This step will remove those errors. Create a folder for the update process and change into that directory.\n[root@server ~]# mkdir ~/tripwire_update [root@server ~]# cd ~/tripwire_update Collect all the errors and log them to a file.\n[root@server ~]# tripwire --check | grep \u0026#34;Filename:\u0026#34; | awk {\u0026#39;print $2\u0026#39;} \u0026gt;\u0026gt; ./tripwire_errors Copy the policy file\n[root@server ~]# cp /etc/tripwire/twpol.txt ~/tripwire_update/twpol.txt Create the bash script below to parse the errors file and fix the issues in the Tripwire policy file.\n[root@server ~]# cat \u0026lt;\u0026lt;\u0026#39;EOF\u0026#39; \u0026gt;\u0026gt; ~/tripwire_update/tripwire_fix_script.sh #!/bin/sh TWERR=\u0026#34;./tripwire_errors\u0026#34;; TWPOL=\u0026#34;./twpol.txt\u0026#34;; export IFS=$\u0026#39;\\n\u0026#39; for i in $(cat $TWERR); do if grep $i $TWPOL then sed -i \u0026#34;s!$i!# $i!g\u0026#34; $TWPOL fi done EOF Run the script.\n[root@server ~]# sh ./tripwire_fix_script.sh Copy the updated Tripwire policy file back to the original location.\n[root@server ~]# cp ~/tripwire_update/twpol.txt /etc/tripwire/twpol.txt Update the tripwire database from the tripwire policy that was created.\n[root@server ~]# tripwire --update-policy -Z low /etc/tripwire/twpol.txt Run a tripwire check. This check will generate a Tripwire Report usually located in /var/lib/tripwire/report/\n[root@server ~]# tripwire --check Run a check #[root@server ~]# /etc/cron.daily/tripwire-check Update (Again) #Update again to fix the errors that will be displayed because we have updated the policy file. 
Change YYYYMMDD \u0026amp; HHMMSS to the date and time that you ran the first check.\nTo find the latest one just run\n[root@server ~]# ls -la /var/lib/tripwire/report/ Update the errors\n[root@server ~]# tripwire --update --twrfile /var/lib/tripwire/report/server-YYYYMMDD-HHMMSS.twr Email Reports #Make sure you have mail installed\n[root@server ~]# yum -y install mailx Next change the Tripwire cron job to send an email report out.\nOpen the cron job file for the tripwire check\n[root@server ~]# vi /etc/cron.daily/tripwire-check Change the following line\ntest -f /etc/tripwire/tw.cfg \u0026amp;\u0026amp; /usr/sbin/tripwire --check to the following (make sure to update the server name and the email address you want the report sent to)\ntest -f /etc/tripwire/tw.cfg \u0026amp;\u0026amp; /usr/sbin/tripwire --check | /bin/mail -s \u0026#34;File Integrity Report (Tripwire) - servername\u0026#34; user@domain.tld Directory Checking (Revert) #Now we need to set Loose Directory Checking back to false.\n[root@server ~]# sed -i \u0026#39;/^LOOSEDIRECTORYCHECKING/ s/true/false/g\u0026#39; /etc/tripwire/twcfg.txt Testing #We need to test the cron job to make sure that it will run, create the report, and email it out to the address specified.\n[root@server ~]# /etc/cron.daily/tripwire-check If no errors were encountered you should have a working Tripwire setup; if any changes are made to your file system you will see them in the report that gets emailed out to you every day. If you have made changes to the system don\u0026rsquo;t forget to update the database, otherwise you will just see the errors growing and won\u0026rsquo;t be able to tell if something has actually changed.\n","date":"11 June 2019","permalink":"/posts/tripwire-ids-security-on-centos-7/","section":"Posts","summary":"This technical article describes how to set up Tripwire IDS on a CentOS 7 system to protect it from any intrusions.","title":"Tripwire IDS Security on CentOS 7"},{"content":"Varnish is a web cache and HTTP accelerator. It is used to improve the performance of dynamic websites by caching pages and then serving the cached version rather than dynamically creating them every time they are requested.\nInstall Varnish #Install Varnish from the Varnish repositories.\nAdd Varnish Repository #The first thing you need to do is add and enable the Varnish repository. Follow the link to install the correct version: https://www.varnish-cache.org/installation/redhat\nInstall the Varnish Application #[root@server ~]# yum install varnish Configure Varnish to work with Apache #We now need to enable the configuration.\nEnable Configuration #Open the Varnish config file\n[root@server ~]# vi /etc/sysconfig/varnish Scroll down to the Alternative Configurations. The easiest way to configure Varnish is to enable configuration 2. Comment out all the other alternative configurations with a #. The configuration should look like the below snippet.\n## Alternative 2, Configuration with VCL # # Listen on port 80, administration on localhost:6082, and forward to # one content server selected by the vcl file, based on the request. Use a # fixed-size cache file. # DAEMON_OPTS=\u0026#34;-a :80 \\ -T localhost:6082 \\ -f /etc/varnish/default.vcl \\ -u varnish -g varnish \\ -S /etc/varnish/secret \\ -s file,/var/lib/varnish/varnish_storage.bin,1G\u0026#34; Line 7 tells Varnish to listen on port 80 for web traffic. Line 8 tells Varnish to listen on localhost port 6082 for admin traffic. Line 9 tells Varnish to load the default.vcl. Line 10 is the user and group to run Varnish under. 
Line 11 is the Varnish secret key. Line 12 sets the storage method Varnish uses for the cached information and the size it is allowed to grow to.\nConfigure Default VCL #Open the default vcl file.\n[root@server ~]# vi /etc/varnish/default.vcl Edit the \u0026ldquo;backend default\u0026rdquo; section to look like the below.\nbackend default { .host = \u0026#34;127.0.0.1\u0026#34;; .port = \u0026#34;8080\u0026#34;; } This tells Varnish to send all traffic to localhost (127.0.0.1) on port 8080. This is the port and IP that Apache will be listening on.\nConfigure Apache to work with Varnish #Next we need to configure Apache to work with Varnish.\nConfigure Apache (Main) #Open the Apache config file\n[root@server ~]# vi /etc/httpd/conf/httpd.conf Change the \u0026ldquo;Listen\u0026rdquo; line to the following\n# # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, in addition to the default. See also the # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # Listen 127.0.0.1:8080 This makes Apache listen on 127.0.0.1 on port 8080.\nConfigure Apache (Virtual Hosts) #If you run virtual hosts on Apache you will also need to reconfigure them to listen on 127.0.0.1 on port 8080 too. Change the \u0026ldquo;NameVirtualHost\u0026rdquo; to look like this\nNameVirtualHost 127.0.0.1:8080 You will also need to change each Virtual Host section to listen on 127.0.0.1 on port 8080. Below is an example.\n\u0026lt;VirtualHost 127.0.0.1:8080\u0026gt; ServerName example.com ServerAdmin webmaster@example.com DocumentRoot /var/www/example.com/htdocs ErrorLog /var/www/example.com/logs/www.example.com.error.log CustomLog /var/www/example.com/logs/www.example.com.access.log combined \u0026lt;/VirtualHost\u0026gt; Forward User IPs to Logs #You may have seen that the web server\u0026rsquo;s logs only display 127.0.0.1 as the source IP. This causes problems when you need to run stats on the log file, as you lose quite a bit of information by losing the real client IPs. This is quite an easy fix.\nUpdate default VCL #Open the default.vcl\n[root@server ~]# vi /etc/varnish/default.vcl You need to update the default vcl with the below code. 
This will forward the source IP.\nbackend default { .host = \u0026#34;127.0.0.1\u0026#34;; .port = \u0026#34;8080\u0026#34;; } sub vcl_recv { remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } Apache Custom Log #We need to create a custom log to deal with the information from Varnish.\nCreate the following file\n[root@server ~]# vi /etc/httpd/conf.d/varnish-log.conf with the following content\nLogFormat \u0026#34;%{X-Forwarded-For}i %l %u %t \\\u0026#34;%r\\\u0026#34; %\u0026gt;s %b \\\u0026#34;%{Referer}i\\\u0026#34; \\\u0026#34;%{User-Agent}i\\\u0026#34;\u0026#34; varnishcombined Update Web Hosts #You will now need to update the web hosts to state that the log format will be \u0026ldquo;varnishcombined\u0026rdquo;. Below is an example.\n\u0026lt;VirtualHost 127.0.0.1:8080\u0026gt; ServerName example.com ServerAdmin webmaster@example.com DocumentRoot /var/www/example.com/htdocs ErrorLog /var/www/example.com/logs/www.example.com.error.log #CustomLog /var/www/example.com/logs/www.example.com.access.log combined CustomLog /var/www/example.com/logs/www.example.com.access.log varnishcombined \u0026lt;/VirtualHost\u0026gt; As you can see from the example above, the old \u0026ldquo;CustomLog\u0026rdquo; is now commented out and the new \u0026ldquo;CustomLog\u0026rdquo; with the varnishcombined entry is active.\nRestart Services #Restart Apache #[root@server ~]# /sbin/service httpd restart Restart Varnish #[root@server ~]# /sbin/service varnish restart Set Auto Start #Auto Start Apache #[root@server ~]# /sbin/chkconfig httpd on Auto Start Varnish #[root@server ~]# /sbin/chkconfig varnish on That\u0026rsquo;s it: you now have a working Apache web server fronted with a Varnish web cache.\n","date":"11 June 2019","permalink":"/posts/varnish-web-cache-on-centos/","section":"Posts","summary":"This technical article will walk you through setting up a Varnish web cache to cache your website.","title":"Varnish Web Cache on CentOS"},{"content":"","date":null,"permalink":"/tags/web/","section":"Tags","summary":"","title":"Web"},{"content":"","date":null,"permalink":"/tags/rhel/","section":"Tags","summary":"","title":"Rhel"},{"content":"Overview #This post describes how to enter single user mode on Redhat 7.\nModify Boot Settings #At the GRUB 2 menu, press the \u0026ldquo;e\u0026rdquo; key to edit the current kernel line\nMove the cursor down to the kernel line; this is the line starting with linux16.\nOn this line remove the rhgb and quiet flags\nand then add the following: rd.break enforcing=0\nrd.break will break the boot sequence at an early stage before the system boots fully. enforcing=0 puts SELinux into permissive mode.\nOnce you have made the edits above press Ctrl+x to resume the boot process using the new flags.\nThe system will continue to boot and you should be dropped into a command prompt if you entered the flags correctly.\nRemount Partitions #To edit the filesystem you have to remount it as read/write.\nswitch_root:/# mount -o remount,rw /sysroot then chroot to the mounted partition\nswitch_root:/# chroot /sysroot System Modifications #Now you are free to make modifications to your system. The example below shows you how to reset the root password, which is a common reason to go into single user mode.\nChange The Root password #sh-4.2# passwd root Changing password for user root. New passwd: mypassword Retype new password: mypassword passwd: all authentication token updated successfully. 
sh-4.2# exit exit switch_root:/# exit logout ","date":"9 July 2018","permalink":"/posts/single-usermode-rhel7/","section":"Posts","summary":"This technical article describes how to get into single user mode on Redhat 7 OS.","title":"Single Usermode RHEL7"},{"content":"","date":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"Businesses are continually looking for ways to use cloud computing to reach their goals. With so many great benefits to offer, cloud computing is definitely become the way of the future. During 2017 we’ve seen an increase in cyber attacks such as the WannaCry ransomware and CIA Vault 7 hack, making it even more important to ensure that security remains one of the most important features of cloud computing.\nAnother important factor is cost; cloud storage prices are also falling, allowing for a new era to emerge during the next few months. When we look at new cloud technology trends in 2018, here are the ones you should watch:\nContainer Orchestration with Kubernetes #One of the most talked about technologies is undoubtedly the role Kubernetes will play in cloud computing in 2018. Kubernetes – much like Docker for containers – has become the cloud orchestrator of choice. Kubernetes can be used by developers to easily migrate and manage software code.\nKubernetes has been adopted throughout the industry, including Docker and Microsoft Azure, showing just how effective this open-source container orchestration system is. It provides simpler cloud deployment and efficient management.\nCloud Cost Containment #With the recent announcement from AWS that they will be providing per-second billing for EC2 instances, other providers are also expected to announce updated pricing plans. In general, it is much easier to calculate the cost for single cloud provider as opposed to calculating the cost in a multi-cloud environment. Multi-cloud environments are difficult because there are different pricing plans for cloud providers. With different cloud service pricing and consumption plans available, pricing can vary greatly between providers.\nServerless Architecture #One of the great benefits of cloud computing is the ability to use extra resources and pay for what you use. This model allows for a VM, or instance, to be a unit for an additional compute resource. This means a ‘function’ has become an even smaller unit of use. It’s cost-efficient for the cloud provider to manage and scale resources on demand in the cloud, reducing all the heavy lifting that was usually required. There is a limitless supply of virtual machines, so there are no upfront costs and a lot of flexibility exists.\nCloud Monitoring as a Service (CMaaS) #Another popular trend that comes from the growing demand for hybrid cloud solutions is Cloud Monitoring as a Service (CMaaS). CMaaS is used to monitor the performance of multiple servers that are interdependent to the service delivery of a business. These services should be independent of the providers themselves and it can be used to monitor in-house environments and host various cloud services by installing gateways to the environment.\nCloud Facilitation for IoT #Gartner Research predicts that there will be around 20 billion mobile devices worldwide by 2020. With so many devices around, the cloud will play a much more significant role. You’ll also need more space to store data such as documents, videos and images, which all help drive the need for IoT in so many ways. 
We should see a lot of development towards IoT in 2018.\nMulti Cloud Strategy #Multi cloud strategies will become a dominant factor in 2018. It allows organizations to deploy different workloads and separately manage them. International Data Corporation predicts that more than 85 percent of enterprise IT corporation will adopt multi cloud technology by 2018. Organizations can save significantly be adopting a multi cloud strategy as they won’t be locked in with only one provider. Enterprises can save millions per year.\nThe Popularity of Cloud Based Big Data Mining #Many companies are launching IoT applications in 2018 and they will rely heavily on big data generated from these applications. However, they don’t necessarily have a great way to mine the data, which is where cloud technology comes in. Cloud based big data mining will definitely see an increase this year, helping companies to use the data from their applications.\nProactive Cloud Analytics with AI #AI can be seen in many areas of our lives; just look at digital assistants like Siri and Cortana, as they all use AI to provide useful information and execute tasks. Companies will incorporate AI into their analytics streams to make proactive business decisions so that they can automate their response and allow for actionable information and recommendations.\nCloud Security Will Remain a Priority #Cloud computing is still emerging and as such, requires a different approach to security than traditional IT infrastructures. In 2018, cloud security will be more important than ever and this offers a great opportunity for cloud solution providers to come up with a robust security solution that is effective for their customers.\nDuring 2018 we will definitely see cloud becoming more strategic, with the help of a few great technologies. It is expected that the adoption of the services above will help to increase performance and automation in terms of cloud computing.\n","date":"18 June 2018","permalink":"/posts/cloud-technology-to-watch-2018/","section":"Posts","summary":"In this article we look at some cloud technology to keep an eye on in 2018.","title":"Cloud Technology to watch in 2018"},{"content":"","date":null,"permalink":"/tags/technology/","section":"Tags","summary":"","title":"Technology"},{"content":"","date":null,"permalink":"/tags/blog/","section":"Tags","summary":"","title":"Blog"},{"content":"","date":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo"},{"content":"Why Hugo? #In a previous post I mentioned that I am moving to Hugo from Wordpress. One of the main reasons for this is to be able to store my blog in Github to allow for version control.\nAutomating Hugo #One thing that I missed from Wordpress was the automated way that it works. In Wordpress you write a draft post, add some images and then publish, thats it. For Hugo you create a post, then use Hugo cli to generate the static content, then upload this to a web server and then its published for the world to see. 
That\u0026rsquo;s too many manual steps, and it makes life difficult.\nHigh Level Automation Workflow # Screenshot of Hugo Workflow High Level Automation Steps # Create articles/posts in markdown (Local) Generate the static HTML (Local) Push static HTML to Github (Remote) Github fires a webhook to my web server (Remote) Webhook invokes a pull of the static content from Github (Server) Automated pull of repository Static content is served from the server (Server) Setup Steps #Create a Github Repository #Create a Github repository for the public folder that is generated by Hugo.\nScreenshot of Repo Creation Create a Webhook #Create the webhook within the repository you just created; this will fire when new code is pushed to this repository.\nScreenshot of Webhook Creation Setup the Webhook Server #Use webhook as a lightweight webhook server and install it on the web server.\nCreate the hooks.json below; this has the configuration for the webhook.\n[ { \u0026#34;id\u0026#34;: \u0026#34;deploy-public\u0026#34;, \u0026#34;execute-command\u0026#34;: \u0026#34;/somepath/deploy-public.sh\u0026#34;, \u0026#34;command-working-directory\u0026#34;: \u0026#34;/somepath\u0026#34;, \u0026#34;trigger-rule\u0026#34;: { \u0026#34;and\u0026#34;: [ { \u0026#34;match\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;payload-hash-sha1\u0026#34;, \u0026#34;secret\u0026#34;: \u0026#34;**********\u0026#34;, \u0026#34;parameter\u0026#34;: { \u0026#34;source\u0026#34;: \u0026#34;header\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;X-Hub-Signature\u0026#34; } } } ] } } ] Bash Script #Next create a bash script deploy-public.sh to actually carry out the work of archiving the existing public folder and then replacing it with a cloned version from Github.\n#!/bin/bash #Name: deploy-public.sh #Set Vars LOGFILE=\u0026#34;/somepath/log.log\u0026#34; TIMESTAMP=`date \u0026#34;+%Y-%m-%d_%H%M%S\u0026#34;` DIRECTORY=\u0026#34;/somepath/virtuallytd.com\u0026#34; # Backup of current site if [ ! -d \u0026#34;${DIRECTORY}/archives\u0026#34; ]; then mkdir ${DIRECTORY}/archives fi cd ${DIRECTORY} tar -cf ./archives/public-${TIMESTAMP}.tar ./public gzip -7 ./archives/public-${TIMESTAMP}.tar rm -fR ./archives/public-${TIMESTAMP}.tar # Remove the old public site rm -fR ${DIRECTORY}/public # Clone the new public site git clone git@gitserv:virtuallytd/blog-public.git ./public Start Webhook Server #With all the above in place you should be able to start the webhook server and have it listen for connections.\nThe verbose flag is set for testing the setup. The webhook server will bind to port 9050 on the external IP you set. 
This can also be proxied as not to expose the service externally.\n/somepath/webhook -hooks /somepath/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 Testing #If you test a connection and all is working well you should see some output like this from the webhook command and the public folder should have been updated and an archive created.\n[root@server ~]# /usr/local/bin/webhook -hooks /etc/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 [webhook] 2018/06/17 20:44:56 version 2.6.8 starting [webhook] 2018/06/17 20:44:56 setting up os signal watcher [webhook] 2018/06/17 20:44:56 attempting to load hooks from /somepath/hooks.json [webhook] 2018/06/17 20:44:56 found 1 hook(s) in file [webhook] 2018/06/17 20:44:56 loaded: deploy-public [webhook] 2018/06/17 20:44:56 serving hooks on http://\u0026lt;External IP\u0026gt;:9050/hooks/{id} [webhook] 2018/06/17 20:44:56 os signal watcher ready [webhook] 2018/06/17 20:45:11 [xxxxxx] incoming HTTP request from \u0026lt;External IP\u0026gt;:42138 [webhook] 2018/06/17 20:45:11 [xxxxxx] deploy-public got matched [webhook] 2018/06/17 20:45:11 [xxxxxx] deploy-public hook triggered successfully [webhook] 2018/06/17 20:45:11 200 | 644.658µs | \u0026lt;External IP\u0026gt;:9050 | POST /hooks/deploy-public [webhook] 2018/06/17 20:45:11 [xxxxxx] executing /somepath/deploy-public.sh (/somepath/deploy-public.sh) with arguments [\u0026#34;/somepath/deploy-public.sh\u0026#34;] and environment [] using /somepath as cwd [webhook] 2018/06/17 20:45:13 [xxxxxx] command output: Cloning into \u0026#39;./public\u0026#39;... [webhook] 2018/06/17 20:45:13 [xxxxxx] finished handling deploy-public If you have any issues with this make sure to check the logging from the webhook server and also check in Github under the webhook page for any responses/errors.\nAutomate Pull from Github #To create an automated pull of data from the github repository we need to configure a deployment key. This key will allow a git pull (Read Only).\nCreate the SSH key #Generate an SSH Key on the server\n[root@server ~]# ssh-keygen -t rsa -C \u0026#34;deploykey@example.com\u0026#34; Save the key somewhere on the system.\nConfigure SSH Credentials #Edit the file [root@server ~]# vi /root/.ssh/config\nAdd the following lines Host gitserv Hostname github.com User git IdentityFile /root/.ssh/id_rsa IdentitiesOnly yes\nAdd Public Deploy Key to Github #Open your repository and go into settings \u0026gt; Deploy Keys.\nIn here add the public key of the keypair we generated in the step before and click save.\nNow when the script we created earlier invokes a git pull, it will use this configuration and use the deploy ssh key to connect to github.\nManaging the Webhook Service #To efficiently manage the webhook server, a systemd service can be created. 
This allows the server to start and stop the webhook service automatically.\nCreating a systemd Service #Create a file named webhook.service in the /etc/systemd/system/ directory with the following content:\n[Unit] Description=Webhook Service After=network.target [Service] Type=simple ExecStart=/usr/local/bin/webhook -hooks /etc/hooks.json -verbose -ip \u0026lt;External IP\u0026gt; -port 9050 ExecStop=/usr/local/bin/stop_webhook_script.sh Restart=on-failure [Install] WantedBy=multi-user.target Creating the Stop Script #Create a script stop_webhook_script.sh to stop the webhook service:\n#!/bin/bash # Find and stop the webhook process PID=$(ps -ef | grep \u0026#39;/usr/local/bin/webhook\u0026#39; | grep -v grep | awk \u0026#39;{print $2}\u0026#39;) if [ ! -z \u0026#34;$PID\u0026#34; ]; then kill $PID echo \u0026#34;Webhook service stopped.\u0026#34; else echo \u0026#34;Webhook service is not running.\u0026#34; fi Place this script in /usr/local/bin and make it executable with chmod +x /usr/local/bin/stop_webhook_script.sh.\nManaging the Service #Enable the service to start on boot with sudo systemctl enable webhook.service. Start it with sudo systemctl start webhook.service and stop it with sudo systemctl stop webhook.service.\n","date":"17 June 2018","permalink":"/posts/hugo-deployment-automation/","section":"Posts","summary":"In this technical article we look at the process of automating a Hugo deployment from a Github commit.​","title":"Hugo Deployment Automation"},{"content":"","date":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai"},{"content":"Cloud computing already plays an important role in our modern lives, but recent developments in artificial intelligence (AI) coupled with the improvements in programming promises a whole new age of cloud computing. We’ll take a closer look at how that technology is quickly emerging and how it will have an impact on our daily lives.\nEvery person with technical knowledge knows that cloud technology brings huge potential and that it has already influences how businesses and people store data and process information. But because cloud technology is fairly new, companies have to think about how it will evolve over time. Things like the rise of mobile technology and the Internet of Things (IoT) have resulted in changes to the cloud - but now there’s something new on everybody’s lips: artificial intelligence. It could improve cloud technology in so many ways.\nWhen IBM spoke about the combination of AI and cloud, they said that it\nPromises to be both a source of innovation and a means to accelerate change.\nThe cloud can help to provide AI with the information it needs to learn, while AI can provide the cloud with more data. This relationship can help to completely transform how AI is developed and the fact that cloud companies such as IBM are spending a lot of time and resources into AI, shows that this is a real possibility.\nCloud technology is spread among a number of servers in various languages with huge data storage. Companies can use this to create automated solutions for their customers. Cloud computing is getting more powerful with AI, making it possible for companies to use AI cloud computing to reach long term goals for their customers.\nAnother important aspect of combining AI with the cloud, is that it can potentially change the manner in which the data was stored earlier and processed. 
This has huge potential and will allow professionals to look over the boundless possibilities for the future.\nAlthough cloud computing on its own has the capability to become a significant technology in many fields, the combination of cloud and AI will enhance it. Cloud computing will be much easier to scale and manage with the help of artificial intelligence. What’s more, the more businesses get on the cloud, the more it needs to be integrated with AI to remain efficient. There will come a point in time when cloud technology can’t exist without AI.\nA Deeper Understanding of AI #Artificial intelligence is much like an iceberg, as there is a lot more hidden that first meets the eye. AI is yet to show its true potential, and it is changing the world of computing together with cloud technology. In fact, it’s believed to be the future of computing.\nAI has the potential to further amplify the amazing capabilities of cloud computing, as it provides tremendous power. It allows machines to react and think like humans do, and helps machines to effectively analyze and learn from historical data, while identifying patterns and making real-time decisions. This may very well lead to an automated process that will virtually eliminate the possibility of human error.\nTech companies can now create AI which can learn. A good example of this is when an AI beat the world’s best Go player. How? By playing millions of games with itself and learning about strategies that players have not yet considered.\nOf course, AI has far better capabilities than just playing games. It is becoming a major player in conversation, where voice-activated AI systems can respond to human commands.\nWhile we are already enjoying assistants like Cortana which can respond to voice commands, tech companies are focusing on developing AI systems that can learn how to respond differently. There is still a lot to be done, but the goal is for an AI to communicate like a human.\nCombining AI and the Cloud #As mentioned, companies who specialize in either AI or cloud are dedicating more of their time and resources into learning both technologies and its capabilities. Basically, cloud AI technologies take one of two forms, it’s either a platform like Google Cloud Machine Learning, which combines machine learning with the cloud, or they are AI cloud services such as IBM Watson.\nWired recently reported on how companies are relying on IBM Watson to help fight cybercrime. But it’s not as simple as simply plugging in the technology and letting it work; Watson has to be taught how to deal with hackers and cyber criminals, and it becomes more effective over time as it stores information.\nIt\u0026rsquo;s interesting to note that while Watson knows so much, and can read far more reports than humans can, it still makes odd mistakes. That’s why researchers are helping Watson and guiding it to think correctly and eventually make no mistakes. At this point in time, AI, cloud, and humans all need each other in some way.\nBy combining AI and the data stored with technology, both AI and humans can analyze more and gather more data than ever before. 
Tech experts have indicated that this may be the year when AI becomes a significant role player in our daily lives and that its capabilities will only be improved with the development of cloud technology.\n","date":"13 June 2018","permalink":"/posts/does-ai-have-a-future-in-cloud-computing/","section":"Posts","summary":"This article discusses the whether AI has a future in cloud computing","title":"Does AI Have A Future in Cloud Computing"},{"content":"","date":null,"permalink":"/tags/containers/","section":"Tags","summary":"","title":"Containers"},{"content":"","date":null,"permalink":"/tags/virtualization/","section":"Tags","summary":"","title":"Virtualization"},{"content":"One of the most popular topics these days concerns containers, and what their role is. Containers have become increasingly important recently, mainly thanks to Docker. Various major providers such as IBM, VMware and Amazon Web Services have all embraced containers with open arms. As a result, this discussion has become a very popular topic and people are asking whether containers will be taking over and replace virtual machines.\nWhat Are Containers? #Containers essentially aren\u0026rsquo;t new, as they became popular a few years ago when Docker unveiled a new way to manage applications simply by isolating specific codes. This refers to a piece of lightweight software that has everything required to successfully run an application. Multiple containers can run on the same operating system and share resources.\nContainers are a hot topic these days, as the world’s top IT companies are using them. They promise a streamlined method of implementing infrastructure requirements, and they also offer a great alternative to virtual machines. In short, if anything goes wrong in the container, it only affects that single container, and not the whole server.\nWhat Are Virtual Machines? #A virtual machine refers to an operating system that fulfills various functions on software instead of hardware. A hypervisor can abstract applications from the specific computer, which allocates resources such as network bandwidth and memory space, to multiple virtual machines. With this technology, service providers can increase network functions running on expensive nodes automatically. Hypervisors work to separate an operating system and applications from the physical hardware. They allow the host machine to operate various virtual machines as guests and thereby maximize the use of resources such as network bandwidth and memory.\nHypervisors metaphorically died when Intel launched their Intel-VTx chip. Before this, Xen and VMware had two different ways in approaching hypervisor capabilities, namely paravirtualization and binary translation. Arguments were held about which was best and faster than the other, but as soon as Intel VTx came along, it was the winner and both Xen and VMware started using this chip.\nAs we move towards cloud applications there is a need to standardize underlying operating systems as you can’t get the same efficiency when you run 10 different operating systems. Whether you are moving towards PaaS or containers, either way, you are slowly moving away from heterogeneity.\nWhy Are Containers So Popular? #In general, containers are much more effective than virtual machines, simply because of the way in which they allocate resources. Containers run in an isolated environment and they have all the necessary resources to run an application. 
The remaining resources that are not used, can be utilized to run other applications, and as a result, containers can run two or three times as many applications as an individual server. Apart from increasing the efficiency of a system, this technology also allows us to save money by not having to invest in more servers in order to handle multiple processes.\nAnother reason why containers are seen as supporting virtual machines, is the fact that they can handle a quicker boot up process. With a typical virtual machine taking up to around a minute to boot, a container can do this in a micro second.\nPaaS tools such as Cloud Foundry, and systems such as Mesos and Kubernetes are already designed to scale your workload drastically as they detect performance failures and take various proactive steps to deal with them.\nContainers have a minimalist structure and that is a key differentiator. Unlike virtual machines, they don’t need a full operating system installed in the container, and don’t need a copy of the hardware. They operate with the minimum amount of resources and they are designed to perform the task they were designed for. A container’s ephemeral nature is another distinguishing characteristic. Containers can be installed and removed without any major disruption to the system. If an experiment should fail, the newer version can be rolled back and replaced. This is a new way of managing a data center and it’s key to the overwhelming interest that technology companies have expressed in Docker and its associated technologies recently.\nVirtual Machines Are Still Useful #Even though containers have many advantages to offer over virtual machines, they are not without fault. One of the biggest issues that comes with containers is its security. Because of the fact that containers use the same operating system, a security breach can occur much easier. A security breach can allow access to the entire system, in comparison to virtual machines. Also, since many container applications are available online, it opens up the window for additional security threats. If the software is infected with malware, which has the ability to spread to the entire operating system.\nSince containers have their advantages and disadvantages, it’s safe to say that virtual machines are not going anywhere – yet. They will likely not replace virtual machines completely, as these technologies complement each other rather than replacing each other. Hybrid systems are currently being develop to utilize the best advantages of both.\n","date":"4 March 2018","permalink":"/posts/will-hypervisors-be-replaced-by-containers/","section":"Posts","summary":"This article discusses if hypervisor technology will be replace containers technology.","title":"Will Hypervisors Be Replaced By Containers"},{"content":"","date":null,"permalink":"/tags/architecture/","section":"Tags","summary":"","title":"Architecture"},{"content":"","date":null,"permalink":"/tags/infrastructure/","section":"Tags","summary":"","title":"Infrastructure"},{"content":"Hyperconvergence refers to a framework that combines networking, computing and storage into one system in an effort to reduce the complexity of data centers and to increase scalability. 
Hyperconverged platforms include a hypervisor for virtualized networking and computing, and typically run on basic server systems.\nThe term hyperconverged infrastructure was coined by Forrester Research and Steve Chambers in 2012 to describe an infrastructure that virtualizes all the elements of a conventional system. This infrastructure typically runs on standard off-the-shelf servers.\nToday, companies typically use this infrastructure for virtual desktop infrastructure, remote workloads, and general-purpose workloads. In some cases, companies use it to run high performance storage, mission critical applications, and server virtualization.\nThe Benefits #The benefits of hyperconvergence include the fact that it is a hardware-defined system that is geared toward a purely software-defined environment where every element runs on commercial servers. The convergence of elements is facilitated by a hypervisor. These systems are made up of direct-attached storage and includes the ability to plug and play into a pool of data-like systems. All physical resources reside on one platform for software and hardware layers, and as an added benefit, these systems eliminate the traditional data-center inefficiencies and reduces total cost of ownership.\nThe servers, storage systems and networking switches are all designed to work together as one system, so it increases ease of use and improve efficiency. Companies can start small and grow bigger as scalability will always be an added benefit. It will also lead to cost savings in terms of power and space, and the avoidance of licensed backup and recovery software.\nThe potential impact is that companies will no longer need to rely on various different storage systems, and it will likely further simplify management and increase resource utilization rates.\nThere is always pressure on an IT department to provide resources instantly, data volume growth is unpredictable, and software defined storage promises great efficiency gains. These are just some of the trends taking place, which is some of the reasons why hyperconverged infrastructure has become so popular in recent years.\nHow Does Hyperconvergence Differ From Converged? #One major difference is that hyperconvergence adds more levels of automation and deeper levels of abstraction. This infrastructure involves preconfigured software and hardware combined in a single system with simplified management.\nWhere legacy systems relied on separate storage, networks and servers, hyperconvergence allows for the simplicity and reliability of using one single system. This also reduces the risk of failure as silos created by traditional infrastructure present barrier to progress and change.\nThis technology will simplify datacenter operations by streamlining deployment, management, and scaling of resources. This is achieved by combining the server and storage resources with intelligent software. Separate servers and storage networks can be replaced with a single solution to create a scalable, agile datacenter solution.\nThe Components Of Hyperconverged Solutions #There are several components that form a hyperconverged solution, including:\nA Distributed Data Plane: This runs across a collection of nodes and deliver networking, virtualization and storage services for applications. This can either be container-based applications or VMs. 
A Management Plane: This allows for easy administration of all resources with the help of a single view and also eliminates the need for separate servers, virtualization, and storage network solutions. Almost all modern hyperconverged solutions are 100 percent software defined. There is no dependency on hardware, as each cluster runs a hypervisor – such as VMware, Microsoft Hyper-V or Nutanix AHV. How Is It Sold #Hyperconverged technology is available as a software-only model, a reference architecture, or an appliance. You can expect bundled capabilities such as data deduplication, data protection, snapshots, compression and WAN optimization, as well as disaster recovery and backup as part of the vendor’s offering.\nThere are various specialist vendors that include SimpliVity, Nutanix and Pivot3. There are also a few big system vendors that entered the market, such as Dell-EMC, Cisco and HPE. The market for hyperconverged integrated systems (HCIS) is predicted to reach nearly $5 billion by 2019, which represents 24 percent of the overall market, as technology moves to mainstream use.\nAt the Gartner Infrastructure, Operations \u0026amp; Data Center Summit in Australia, Andrew Butler, vice president at Gartner, said\nThis evolution presents IT infrastructure and operations leaders with a framework to evolve their implementations and architectures.\nHe believes that HCIS is not a destination, but an \u0026ldquo;evolutionary journey\u0026rdquo;.\nThe cost of such an infrastructure can vary dramatically, depending on the underlying hypervisor. It depends on the licensing built in, as well as other costs involved in configuring the software for use in a specific environment. Due to the fact that storage is a software service, there is no need for expensive hardware infrastructure, which is an added benefit.\nBuilding a hyperconverged system in a corporate environment is more than just replacing a few devices it requires various aspects and all kinds of IT staff to support it.\nSoftware defined data center solutions manager at Hewlett-Packard, Niel Miles, described \u0026ldquo;software defined\u0026rdquo; as programmatic controls of a company’s infrastructure as it moves forward. Existing technology cannot keep up with the changes, requiring additional software.\nIn Conclusion #Although the concept is only about five years old, there are a few fundamental differences between hyperconverged infrastructure and converged infrastructure. It’s the latest step In pursuing an infrastructure that is easy and cost-effective to manage, and allows you to tidy up a datacenter infrastructure completely.\n","date":"27 February 2018","permalink":"/posts/what-is-hyperconverged-infrastructure/","section":"Posts","summary":"This article looks at Hyperconverged Architecture, what it is and how it can help.","title":"What is Hyperconverged Infrastructure"},{"content":"When it comes to cloud servers and old vs new technology, the concept was usually a difficult one to grasp – until experts started using the popular analogy of pets vs cattle. It helped to perfectly explain the old technology vs the new, and how you can differentiate between the two. 
It was a vital tool to understand the cloud, and the new way of doing things.\nWith so many confusing terminology and concepts to keep track of, this analogy aims to set the record straight and offer an accurate reference that everyone can use.\nThe Background #Back in 2011, cloud pioneer and member of OpenStack Foundation, Randy Bias, struggled to explain how cloud native apps, AWS, and cloud in general was very different from what it was before. Since most explanations took a lot of time, he wanted something simple and effective, and he did some research – until he came upon a presentation by Bill Baker, where he was focusing mainly on ‘scale-out’ and ‘scale-up’ architectures in general.\nBut most importantly, Bill used the context of comparing pets with cattle when he talked about ‘scale-up’ and ‘scale-out’ technology. When you put pets and cattle in the context of cloud, and focus on the fact that pets are unique and cattle are disposable, it makes a lot of sense.\nIn short, if you see a server as being replaceable, it’s a member of the herd. But if you see a server as indispensable (for e.g. a pair of servers working together as a single unit), it’s a pet. Randy explains it best\nIn the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.\nThis is basically the pitch he would use, word for word.\nUnderstanding Pets and Cattle #Let’s take a minute to clearly define pets and cattle. When we talk about pets, we refer to servers that are seen as irreplaceable, or unique, and basically a system that cannot ever be down. These are typically manually built and managed, and also ‘hand fed’. Some examples can be solitary servers, firewalls, database systems and mainframes.\nWhen we talk about cattle, we refer to collections of more than two servers that are built with automated tools and designed to fail at some point. During failure of these servers, human intervention is not needed as they can route around failures by restarting failed servers or simply replacing them. Some examples of these servers include multi-master datastores, web server arrays, and basically anything that is load balanced. The key to remember here is that failures can and will happen, so every server and every component should be able to fail without impacting the system.\nThe concept has been around for quite a while, as Yale computer scientist David Gelemter used it to explain file systems. He said\nIf you have three pet dogs, give them names. If you have 10,000 head of cattle, don’t bother.\nThis explanation has helped educate various IT professionals, giving them the tools to further explain the old vs the new.\nExpanding on the Analogy #It’s important to stick to the explanation above, or at least start with it, before moving to your own adaptation. Some people have expanded on this analogy and made their own unique version to explain their point – which is perfectly fine – but it can create a bit of confusion.\nHere’s an example, used by the Kubernetes team to explain their \u0026ldquo;Pet Sets\u0026rdquo; addition to their functionality. While they understandably took the pets vs cattle analogy and interpreted it to explain their stateful applications, it was a bit confusing for some. 
Particularly because they used examples of stateful applications supported in Kubernetes 1.3 using Pet Sets, which are cattle-architecture systems. They are all designed for failure, and by their definition, they now use cattle data stores using Pet Sets.\nIt is important that we don’t confuse people when they try to understand the new technology, how it works and why it is important.\nGetting Value from the Analogy #If you want to take the pets vs cattle analogy and amend it to suit your specific needs, you are certainly free to do so. But just understand where it comes from, how it is used, and how it can help people to understand the complex principle of modern server architecture. It might be a good gesture to acknowledge where the analogy came from and where you draw your inspiration, by referring back to the original blog post for reference and the true history.\nUltimately, focusing on the fact that servers are disposable – a fact that Google actually pioneered – is a very important fact for the pets vs cattle analogy. Using this and focusing on another aspect, or describing something that it is not intended to explain, can add mud to the water and confuse some people on the issue at hand.\nIn Conclusion #By understanding and accurately representing the true origins of this analogy, we will maintain its value to those new to the concept of how computing is now delivered. Cloud technology is undoubtedly the way of the future, and explaining this correctly will make all the difference.\n","date":"20 February 2018","permalink":"/posts/pets-vs-cattle-analogy-explained/","section":"Posts","summary":"This article explains the Pets vs cattle analogy when describing server infrastructure in IT.","title":"Pets vs Cattle Analogy Explained"},{"content":"Serverless architecture is often referred to as Function as a Service (FaaS) or serverless computing, and it is widely used for applications that are deployed in the cloud. With serverless architecture, there is no need for server hardware and software to be managed by the developer, as these applications are dependent on third party software.\nIn a serverless environment, applications divided into individual functions, and these can be scaled and invoked individually. It’s a powerful solution for many application developers, but it’s important to understand exactly what it is, and what the possible vulnerabilities can be.\nServerless technology is already a popular topic in the software world, and there are many vendor products, books and open source frameworks dedicated to this. Its use has become very popular solution for many organizations deploying cloud applications, with even some of the traditionally conservative organizations using some form of serverless technologies.\nThis software trend delivers the scaling necessary and reduces time-to-market for a reliable, effective application platform. Just think Uber, Airbnb and Instagram – they all have large user databases and real-time data that functions seamlessly due to serverless architecture. And between Google’s Play Store and Apple’s App Store, there are more than four million apps competing for attention, making serverless architecture a great way to gain a competitive advantage and reduce development costs, which can easily top six figures. The term ‘serverless’ has received some backlash, as it implies that there are no servers at all, but in fact that are naturally still servers running in the background. 
The difference is that they are managed by vendors but you don’t have access to change or manage them. That’s also why many feel it should be referred to as Function as a Service.\nThe Benefit of Serverless Architecture #When you think of software applications being hosted on the Internet, it usually means that you need to have some sort of server infrastructure. This typically means either a physical or virtual server that needs to be managed, including all the different hosting processes and operating system that it needs for your application to run. Using a virtual server from providers such as Microsoft or Amazon, you can eliminate any hardware issues, but you’ll still have to manage the server software and operating system.\nWhen you move to serverless architecture, you focus only on the application code’s individual functions. Popular services like Microsoft Azure Functions, AWS Lambda and Twilio Functions all take care of the physical hardware, the web server software, and the operating system This means you only need to focus on the code.\nHere are a few great benefits of using serverless architecture:\nBetter scalability. Developers all want their apps to be successful, but if it does happen, they need to make sure they can handle it. That’s why provisioning infrastructure is a great choice to make, as you will be prepared when success strikes.\nReduce time to market. Developers can now create apps within days or even hours, instead of weeks and months. There are many new apps that rely on third-party APIs including social channels like Twitter, maps like Mapbox, and authentication like OAuth.\nLower developer cost. Serverless architecture significantly reduces the need for human resources and computing power. Servers don’t need to be so expensive anymore; plus, if you don’t need always-on servers, your running costs will reduce even more.\nServerless architecture also allows for faster innovation, and this means product engineers can innovate at a rapid speed since this technology reduces any system engineering problems. This means less time for operations, and a smoother application. Product engineers can now rather focus their attention on developing the business logic of the application.\nHaving access to out-of-the-box scalability is one of the major reasons why developers use serverless architecture. Costs are kept to a minimum, as you are basically only paying when something happens, i.e. a user takes a certain action. Generally speaking, this is a great solution for most developers looking for a cost-effective solution.\nPossible Drawbacks #Serverless architecture remains one of the best technologies yet, but it’s worth noting that it may in some cases have slight drawbacks that developers should be aware of.\nHere are a few aspects to consider:\nComplex architecture. It might be challenging to manage too many functions simultaneously, especially since it can take time to decide how small every function should be. There needs to be a balance to the amount of functions that can be called by an application. AWS Lambda, for example, has limits as to how many concurrent executions you can run of your lambdas.\nNot enough operational tools. Developers rely on vendors to provide monitoring and debugging tools. Debugging systems can be difficult, and will require access to a lot of relevant information to help identify the root cause.\nImplementation testing. Integration tests can be tough to implement. 
The units of integration, or function, is smaller than with other architectures, and this means developers rely much more on integration testing that with other architectures. There can also be problems with versioning, deployment and packaging.\nThird-party API system problems. Some of the problems due to the use of third-party APIs can include vendor lock-in, multi-tenancy problems, vendor control, and security issues. Giving up system control while APIs are implemented can cause loss of functionality, system downtime and unexpected limits.\nIn Conclusion #With serverless technology, applications can be built faster, and scaled more effectively. Additional computing power can be assigned automatically, and there is no need for developers to monitor and maintain complex servers.\nServerless architecture can accommodate a wide range of developing needs. From connected aircraft engines to file-sharing apps - data continues to grow and evolve, and serverless will become the standard in development and execution of various functions.\nBy significantly reducing development and management costs, serverless architecture is set to completely take over the software architecture space.\n","date":"1 February 2018","permalink":"/posts/what-is-serverless-architecture/","section":"Posts","summary":"This article describes what serverless architecture is and how it can be used.","title":"What is Serverless Architecture"},{"content":"Over the last few years, Docker has relied on their own container management system to not only form the roadmap of their company, but also attract high dollar investors. But this all changed as the company announced their support of Kubernetes at DockerCon Europe 2017 in Copenhagen. With Docker being the leading platform for software containerization, this announcement shows just how valuable Kubernetes are in the container orchestration space.\nDocker has always focused on the developer, offering the ability to use a standard framework to build, ship and run applications. Their primary platform to orchestrate containers is Docker Swarm., which also offers a close integration with Docker Enterprise Edition. With the integration of Kubernetes, Swarm offers value-added capabilities above Kubernetes.\nOrganizations will now be able to make use of Kubernetes, while still relying on Docker’s various management features, including security scanning. In addition to Windows and Linux, the system will also be compatible with a variety of Docker-certified container images.\nDocker and Kubernetes have been competing against each other since 2015, making this move even more genius. In 2016, Docker partnered with Microsoft and brought its container runtime to the Azure cloud platform, gaining a lot of Windows platform support.\nSo, Why Kubernetes? #Kubernetes, also referred to as k8s, was originally developed by Google, and is now hosted by the Cloud Native Computing Foundation (CNCF). It’s an open source platform that aims to enhance cloud native technology development by using a new set of container technologies. With Kubernetes, you can deploy and schedule container applications in both virtual and physical environments, making it a leading container orchestration engine.\n“We’re embracing Kubernetes into our product line. We’re bringing Kubernetes into Docker Enterprise Edition as a first-class orchestrator right alongside Docker Swarm,” said Scott Johnston, Chief Operating Officer of Docker. 
He also mentioned that they will be integrating Kubernetes into their Mac and Windows products. Steve Singh, Chief Executive Officer of Docker, believes that embracing Kubernetes will rule out potential conflicts, and that they want customers to have a choice between using Swarm or Kubernetes, or both. \u0026ldquo;Our hope is that every application company in the world builds and delivers their products on the Docker platform in Docker containers,\u0026rdquo; Singh said.\nBut Kubernetes offers far more: it has many capabilities specifically for orchestration, including load balancing, service discovery and horizontal scaling. It also gives organizations a flexible platform to execute their workloads in the cloud or on-site, without the need for any application-layer changes. Kubernetes also has a very large developer community, making it one of the fastest growing open source projects in the world.\nContainer Technology is Growing #Container technology is growing rapidly, with the market expected to grow around 40 percent every year, to an impressive $2.7 billion by 2020. Experts believe that a big factor in this growth is that organizations are adopting containers specifically for their portability, which reduces costs and offers better infrastructure utilization.\nKubernetes is fast becoming the central container orchestration engine for various leading cloud providers such as IBM, Google, Pivotal, Oracle, Microsoft, and Red Hat. Most industry leaders in Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) have also joined CNCF, making Kubernetes part of their service offering.\nGoing forward, the Kubernetes integration will be updated every six months, beginning with version 1.8, which is included in Docker Enterprise Edition. For desktop users, the Windows and Mac versions will be taken directly from the master branch, ensuring that new features arrive in a timely manner without any complications.\nIt’s also interesting to note that Solomon Hykes, one of the founding members of Docker, sits on the technical committee of the CNCF, the group that manages containerd, Linkerd, and Kubernetes, along with a few other popular container-focused projects, and he has been contributing to various CNCF projects.\n“We’re already active. With this announcement, that’s going to continue and accelerate. We intend to be first class citizens and participate as full class members,” said Johnston.\nJohnston also noted that they are working with a security team that was acquired from Square a few years ago, which handles most of the security work for Docker Enterprise Edition. The team constantly improves the overall security of the platform, and will continue to do so.\nSwarm and Kubernetes: Side by Side #Docker decided to provide a design that allows Swarm and Kubernetes to run simultaneously in the same cluster. When Swarm is deployed, an option is provided to also install Kubernetes, which will then take on the redundancy design of the Swarm install.\nHykes said that developers who use Docker won’t have to learn new tools for Kubernetes. Rather, a complete Kubernetes distribution will be built in with the next version of Docker, allowing developers to use the same tools they have always used.\n\u0026ldquo;You can just keep developing and it just works, and if you do want to use Kubernetes tools, Docker is a good distribution, so you get the best of all worlds,\u0026rdquo; Hykes said.
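Once the bundled Kubernetes is running alongside Swarm, a quick sanity check is to query both orchestrators from the same workstation. A minimal sketch using the standard kubectl and docker CLIs (this assumes kubectl is installed and the Docker CLI is pointed at a Swarm-enabled engine; output will vary by cluster):\n[user@server ~]$ kubectl get nodes [user@server ~]$ docker node ls\nIf both commands return a node list, the Kubernetes and Swarm control planes are reachable.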
\nWhen looking at resources, it might be challenging as both Swarm and Kubernetes can run on one host, each being unaware of the other. This means that each orchestrator will assume it has full use of the host, which is why Docker does not recommend that both be run on the same host.\nWith Kubernetes now being the modern standard for container orchestration, Docker made the right decision to support Kubernetes. Instead of competing, it is embracing the technology and offering its clients exactly what they want: developer tools that are easy to work with.\nIn Conclusion #This is definitely a very important moment for the container ecosystem, as Docker remains a leader when it comes to container-based development. With availability expected in Q1 2018, and the integration of Kubernetes with Docker EE, Docker is not only a leading development platform, but also serves as a production-level platform that can compete with PaaS solutions.\n","date":"25 January 2018","permalink":"/posts/docker-embracing-kubernetes/","section":"Posts","summary":"In this article we look at how Docker has embraced Kubernetes.","title":"Docker Embracing Kubernetes"},{"content":"","date":null,"permalink":"/tags/certificates/","section":"Tags","summary":"","title":"Certificates"},{"content":"","date":null,"permalink":"/tags/ssl/","section":"Tags","summary":"","title":"Ssl"},{"content":"This document will guide you through creating a Certificate Signing Request (CSR) with Subject Alternative Names (SAN).\nGetting Started #These instructions have been run on a RHEL Linux system.\nSAN stands for \u0026ldquo;Subject Alternative Names\u0026rdquo;, which lets you use a single certificate for multiple Common Names (CN). With a SAN certificate, you can cover multiple complete hostnames.\nFor example:\nexample.com example.net example.org You can have the above domains and more in a single certificate. One use case for this is load balancing: the Virtual IP can be the CN, and the hosts behind the load balancer become the SAN entries.
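You can also inspect the SAN entries of any public certificate straight from the command line. A small sketch using standard OpenSSL tooling (wikipedia.org is just an example endpoint; any HTTPS host will do):\n[user@server ~]$ echo | openssl s_client -connect wikipedia.org:443 -servername wikipedia.org 2\u0026gt;/dev/null | openssl x509 -noout -text | grep -A1 \u0026#39;Subject Alternative Name\u0026#39;\nThe grep should print the X509v3 Subject Alternative Name header followed by the list of DNS entries.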
\nNext we look at a real life example, wikipedia.org, which has many SAN entries in a single certificate.\nScreenshot of Wikipedia SAN As you can see in the screenshot, there are multiple SAN entries for the wikipedia.org URL.\nPrerequisites #A working installation of OpenSSL [root@server ~]# yum install openssl\nCreate CSR Config #Create a directory to hold the CSR, key and eventually the certificate [user@server ~]$ cd /tmp [user@server ~]$ mkdir /tmp/san_cert [user@server ~]$ cd /tmp/san_cert\nCreate a file called san_cert.cnf [user@server ~]$ touch /tmp/san_cert/san_cert.cnf [user@server ~]$ vi /tmp/san_cert/san_cert.cnf\nAdd the following content to the /tmp/san_cert/san_cert.cnf file [ req ] default_bits = 2048 distinguished_name = req_distinguished_name req_extensions = v3_req prompt = no [ req_distinguished_name ] countryName = DE stateOrProvinceName = BY localityName = Munich organizationName = SomeCompany organizationalUnitName = SomeUnit commonName = vip.example.com emailAddress = user@example.com [ v3_req ] subjectAltName = @alt_names [alt_names] DNS.1 = vip.example.com IP.1 = 192.0.2.10 DNS.2 = host01.example.com IP.2 = 192.0.2.20 DNS.3 = host02.example.com IP.3 = 192.0.2.30\nTo add additional SAN records, add them to the alt_names section and save the file\nCreate the CSR #Execute the following OpenSSL command, which will generate the CSR and key file [user@server ~]$ openssl req -out /tmp/san_cert/san_cert.csr -newkey rsa:2048 -nodes -keyout /tmp/san_cert/san_cert_private.key -config /tmp/san_cert/san_cert.cnf\nThis will create san_cert.csr and san_cert_private.key in the /tmp/san_cert/ directory. You have to send san_cert.csr to your certificate signing authority so they can generate and provide you the certificate with SAN attributes.\nTesting #Verify the CSR #You can verify the CSR has been created with the SAN attributes by running the following command. The output should list the DNS and IP entries; if nothing is returned, there is a problem with the cnf file. [user@server ~]$ openssl req -noout -text -in /tmp/san_cert/san_cert.csr | grep DNS DNS:vip.example.com, IP Address:192.0.2.10, DNS:host01.example.com, IP Address:192.0.2.20, DNS:host02.example.com, IP Address:192.0.2.30\n","date":"10 January 2018","permalink":"/posts/ssl-certificates-with-san-attributes/","section":"Posts","summary":"This technical article will show you how to create a Certificate Signing Request with SAN attributes.","title":"SSL Certificates with SAN Attributes"},{"content":"Moving to Hugo #I have decided to move my technical blog to Hugo. This was primarily down to the fact that I can now put all my posts under GitHub version control, which allows more powerful management of documents. To enable this migration all articles will have to be converted into Markdown.\nMy current blogging software is Wordpress. Whilst it has served me well, I find it a little overkill for the documents I write and publish. It has quite an overhead, using Apache, PHP and MySQL to serve what is essentially static content. Hugo uses a single binary to parse Markdown files and create static HTML files. Apache is then used to serve the static content.\nEach platform has its pros and cons, but for me, reducing overhead and improving version control/management is the deciding factor.\nMoving to Markdown will also give me some benefits:\nA standard format for all articles. Simple output conversion to PDF, Word, HTML and other formats.
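For illustration, the day-to-day workflow is only a handful of commands. A rough sketch using standard Hugo commands (the post path is just a placeholder):\n[user@server ~]$ hugo new posts/example-post.md [user@server ~]$ hugo server -D [user@server ~]$ hugo\nhugo new creates a Markdown file with front matter, hugo server -D previews the site locally including drafts, and hugo builds the static HTML into the public/ directory for Apache to serve.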
Going forward I will provide more in-depth detail around my solution.\n","date":"9 January 2018","permalink":"/posts/migration-to-hugo/","section":"Posts","summary":"This article describes my reasons for moving to Hugo from Wordpress for my site.","title":"Migration to Hugo"},{"content":"A common method of stakeholder analysis is a Stakeholder Matrix. This is where stakeholders are plotted against two variables. These variables can be the importance of the stakeholder against their influence.\nMatrix Diagram # Screenshot of a Stakeholder Matrix Boxes A, B and C are the key stakeholders of the project. Each box is summarised below:\nBox A #These are stakeholders who have a high degree of influence on the project and who are also of high importance for its success. Good working relationships must be built with these stakeholders.\nBox B #These are stakeholders of high importance to the success of the project, but with low influence. These are stakeholders who might be beneficiaries of a new service, but who have little ‘voice’ in its development.\nBox C #These are stakeholders with high influence, who can therefore affect the project outcomes, but whose interests are not necessarily aligned with the overall goals of the project.\nBox D #The stakeholders in this box, with low influence on, or importance to, the project objectives, may require limited monitoring or evaluation, but are of low priority.\nHow to Use # Make a list of all stakeholders. Write the name of each stakeholder on a post-it note or index card. Rank the stakeholders on a scale of one to five, according to one of the criteria on the matrix, such as \u0026lsquo;interest in the project outcomes\u0026rsquo; or \u0026lsquo;interest in the subject\u0026rsquo;. Keeping this ranking for one of the criteria, plot the stakeholders against the other criteria of the matrix. This is where using post-it notes or removable cards is useful. Ask the following questions: Are there any surprises? Which stakeholders do we have the most/least contact with? Which stakeholders might we have to make special efforts to ensure engagement? ","date":"5 December 2017","permalink":"/posts/stakeholder-matrix/","section":"Posts","summary":"This article describes what a stakeholder matrix is and how to use it for projects.","title":"Stakeholder Matrix"},{"content":"This document will show you how to clean up old, unused kernels on a RHEL7 (Redhat 7 or CentOS 7) based machine using YUM. If your system has accumulated a number of old kernel packages over time and you want to limit how many are kept, this is the how-to for you.\nCheck Installed Kernels #The command below will list all kernels that are currently installed on the system\n[root@server ~]# rpm -q kernel kernel-3.10.0-514.el7.x86_64 kernel-3.10.0-514.6.1.el7.x86_64 kernel-3.10.0-693.5.2.el7.x86_64 kernel-3.10.0-693.11.1.el7.x86_64 kernel-3.10.0-693.11.6.el7.x86_64 The uname command will show which kernel is currently running\n[root@server ~]# uname -r 3.10.0-693.11.6.el7.x86_64 Remove Old Kernels #Next we will install the yum-utils package, which contains the tools we need to limit the number of installed kernels.\nInstall Utilities #[root@server ~]# yum install yum-utils Set Kernels to Keep #The package-cleanup tool is used to remove old kernels while keeping only the number you specify.
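If you want to preview which kernels would be removed before running it, you can list everything except the two newest. A small sketch, assuming GNU coreutils (version sort and negative head counts):\n[root@server ~]# rpm -q kernel | sort -V | head -n -2\nAnything printed here is what package-cleanup should remove when asked to keep two kernels.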
The command below keeps the 2 newest kernels and removes the rest.\n[root@server ~]# package-cleanup --oldkernels --count=2 Loaded plugins: fastestmirror --\u0026gt; Running transaction check ---\u0026gt; Package kernel.x86_64 0:3.10.0-514.el7 will be erased ---\u0026gt; Package kernel.x86_64 0:3.10.0-514.6.1.el7 will be erased ---\u0026gt; Package kernel.x86_64 0:3.10.0-693.5.2.el7 will be erased --\u0026gt; Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Removing: kernel x86_64 3.10.0-514.el7 @anaconda 148 M kernel x86_64 3.10.0-514.6.1.el7 @updates 148 M kernel x86_64 3.10.0-693.5.2.el7 @updates 59 M Transaction Summary ================================================================================ Remove 3 Packages Installed size: 355 M Is this ok [y/N]: y Downloading packages: Running transaction check Running transaction test Transaction test succeeded Running transaction Erasing : kernel.x86_64 1/3 Erasing : kernel.x86_64 2/3 Erasing : kernel.x86_64 3/3 Verifying : kernel-3.10.0-693.5.2.el7.x86_64 1/3 Verifying : kernel-3.10.0-514.6.1.el7.x86_64 2/3 Verifying : kernel-3.10.0-514.el7.x86_64 3/3 Removed: kernel.x86_64 0:3.10.0-514.el7 kernel.x86_64 0:3.10.0-514.6.1.el7 kernel.x86_64 0:3.10.0-693.5.2.el7 Complete! Kernel Count Check #Next check how many kernels are left installed; it should be 2\n[root@server ~]# rpm -q kernel kernel-3.10.0-693.11.1.el7.x86_64 kernel-3.10.0-693.11.6.el7.x86_64 Update Installed Kernels Permanently #Next we need to set the number of kernels to stay at two permanently.\nEdit /etc/yum.conf or /etc/dnf/dnf.conf and set installonly_limit:\ninstallonly_limit=2 That's it. Now, whenever we update the system, only the last two kernels will be kept on the system.\n","date":"28 November 2017","permalink":"/posts/kernel-cleanup-using-yum/","section":"Posts","summary":"This technical article describes the process to use Yum to clean up old and unused kernels on a RHEL based system.","title":"Kernel Cleanup Using YUM"},{"content":"","date":null,"permalink":"/tags/yum/","section":"Tags","summary":"","title":"Yum"},{"content":"","date":null,"permalink":"/tags/lvm/","section":"Tags","summary":"","title":"Lvm"},{"content":"This procedure describes how to move data from one Physical Volume to another in an LVM configuration on a RHEL based system.\nAcronyms # Acronym Meaning PV Physical Volume LV Logical Volume VG Volume Group High-level Procedure # Check Current Configuration (Using Multipath/powermt) Check Space on Existing LUNs and VGs Configure/Present LUN to server Scan for LUN on server Add LUN to PV Extend VG to include new LUN Check new LUN and VG have enough space to migrate Migrate data from one PV to new PV Remove old PV from VG Check VG has correct LUNs Detailed Procedure #For the example below, it is assumed 3 LUNs will be used and 1 will be updated/swapped.\nLUN Presentation #Confirm the LUN is presented to the server from the storage.\nLUN Rescan #Rescan for presented LUNs. Check how many fibre channel hosts are on the system. [root@server ~]# ls /sys/class/fc_host host0 host1 host2 host3 Perform a rescan on each fc port/host; the individual commands follow, or the same steps can be scripted as the short loop sketched below.
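If the server has several HBA ports, the same rescan can be scripted instead of typed per host. A minimal sketch, assuming bash and the sysfs paths shown above:\n[root@server ~]# for h in /sys/class/fc_host/host*; do echo 1 \u0026gt; $h/issue_lip; done [root@server ~]# for h in /sys/class/scsi_host/host*; do echo \u0026#34;- - -\u0026#34; \u0026gt; $h/scan; done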
[root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host0/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host0/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host1/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host1/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host2/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host2/scan [root@server ~]# echo \u0026#34;1\u0026#34; \u0026gt; /sys/class/fc_host/host3/issue_lip [root@server ~]# echo \u0026#34;- - -\u0026#34; \u0026gt; /sys/class/scsi_host/host3/scan [root@server ~]# cat /proc/scsi/scsi | egrep -i \u0026#39;Host:\u0026#39; | wc -l\nRestart Multipathd (If Used) #Restart multipathd when the scan has completed. [root@server ~]# service multipathd restart Check Multipath has new routes for the newly presented LUNs and that both paths are active. [root@server ~]# multipath -ll\nRescan PowerPath (If Used) #Rescan PowerPath. [root@server ~]# powermt config Check PowerPath has new routes for the newly presented LUNs and that both paths are active. [root@server ~]# powermt display dev=all\nCheck Current LUNs #Check the current LUNs in the VG. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0\nAdd New LUNs #If the scan was successful, add the new LUN as a PV. [root@server ~]# pvcreate /dev/mapper/mpathd\nCheck Current LUNs #As you can see below, it is added but not currently assigned to a VG. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd lvm2 --- 1020.00m 1023.00m\nAdd PV to VG #Add the newly created PV to the VG. [root@server ~]# vgextend vg_test01 /dev/mapper/mpathd\nCheck VG #Make sure you can see the PV. As you can see below, it is added and now assigned to a VG. You can also see the new LUN /dev/mapper/mpathd has free space. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1023.00m 0 /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 1023.00m\nMigrate Data #Now move the data from the old LUN to the new LUN. [root@server ~]# pvmove /dev/mapper/mpatha /dev/mapper/mpathd /dev/mapper/mpatha: Moved: 0.39% /dev/mapper/mpatha: Moved: 38.04% /dev/mapper/mpatha: Moved: 75.69% /dev/mapper/mpatha: Moved: 100.00%\nCheck VG #Check the data has moved. You can now see the old LUN /dev/mapper/mpatha is the one with the free space and /dev/mapper/mpathd is no longer 100% free. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha vg_test01 lvm2 a-- 1020.00m 1020.00m /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\nRemove Old LUN #Now remove the old LUN from the VG. Make sure this is the LUN that is 100% free. [root@server ~]# vgreduce vg_test01 /dev/mapper/mpatha
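Optionally, before going any further with the old PV, you can confirm the volume group and its logical volumes still look healthy and take a metadata backup. A short sketch using standard LVM2 tools:\n[root@server ~]# vgs vg_test01 [root@server ~]# lvs vg_test01 [root@server ~]# vgcfgbackup vg_test01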
\nCheck VG #Check the old LUN has been disassociated from the VG. [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpatha lvm2 --- 1023.00m 1023.00m /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\nRemove Old PV #Remove the old LUN [root@server ~]# pvremove /dev/mapper/mpatha\nCheck VG #Check the old LUN /dev/mapper/mpatha has been removed [root@server ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/mpathb vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathc vg_test01 lvm2 a-- 1020.00m 0 /dev/mapper/mpathd vg_test01 lvm2 a-- 1020.00m 0\n","date":"17 October 2017","permalink":"/posts/lvm-migration/","section":"Posts","summary":"This technical article covers how to migrate data from one Physical Volume to another in an LVM configuration on RHEL based systems.","title":"LVM Migration"},{"content":"The Most Common OpenSSL Commands #OpenSSL is one of the most versatile SSL tools. It is an open source implementation of the SSL/TLS protocols. OpenSSL is usually used to create a CSR (Certificate Signing Request) and Private Keys. It also has a lot of different functions that allow you to view the details of a CSR, Key or Certificate and convert the certificate to different formats.\nListed below are the most common OpenSSL commands and their usage:\nGeneral OpenSSL Commands #These commands enable generation of Private Keys, CSRs and Certificates.\nGenerate a new Private Key and Certificate Signing Request #[root@server ~]# openssl req -out csr.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key Generate a self-signed certificate #[root@server ~]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privatekey.key -out certificate.crt Generate a certificate signing request (CSR) for an existing private key #[root@server ~]# openssl req -out csr.csr -key privatekey.key -new Generate a certificate signing request based on an existing certificate #[root@server ~]# openssl x509 -x509toreq -in certificate.crt -out csr.csr -signkey privatekey.key Remove a passphrase from a private key #[root@server ~]# openssl rsa -in privatekey.pem -out newprivatekey.pem Checking Using OpenSSL #These commands enable checking of information within a Private Key, CSR or Certificate.\nCheck a Certificate Signing Request (CSR) #[root@server ~]# openssl req -text -noout -verify -in csr.csr Check a private key #[root@server ~]# openssl rsa -in privatekey.key -check Check a certificate #[root@server ~]# openssl x509 -in certificate.crt -text -noout Check a PKCS#12 file (.pfx or .p12) #[root@server ~]# openssl pkcs12 -info -in keystore.p12 Debugging Using OpenSSL #These commands enable debugging of Private Keys, CSRs and Certificates.\nCheck the MD5 hash of a Public Key to ensure it matches the contents of the CSR or Private Key #[root@server ~]# openssl x509 -noout -modulus -in certificate.crt | openssl md5 [root@server ~]# openssl rsa -noout -modulus -in privatekey.key | openssl md5 [root@server ~]# openssl req -noout -modulus -in csr.csr | openssl md5 Check an SSL connection. All the Certificates (including Intermediates) should be displayed #[root@server ~]# openssl s_client -connect www.google.com:443
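Another handy debugging check is a certificate's validity window, for either a local file or a remote endpoint. A small sketch using standard OpenSSL options (www.google.com is just an example host):\n[root@server ~]# openssl x509 -noout -dates -in certificate.crt [root@server ~]# echo | openssl s_client -connect www.google.com:443 -servername www.google.com 2\u0026gt;/dev/null | openssl x509 -noout -dates\nBoth forms print the notBefore and notAfter dates, which makes expiry easy to spot.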
Converting Using OpenSSL #These commands allow you to convert Keys and Certificates to different formats to make them compatible with specific types of servers or software. For example, you can convert a normal PEM file that would work with Apache to a PFX (PKCS#12) file and use it with Tomcat or IIS.\nConvert a DER file (.crt .cer .der) to PEM #[root@server ~]# openssl x509 -inform der -in certificate.cer -out certificate.pem Convert a PEM file to DER #[root@server ~]# openssl x509 -outform der -in certificate.pem -out certificate.der Convert a PKCS#12 file (.pfx .p12) containing a Private Key and Certificates to PEM #[root@server ~]# openssl pkcs12 -in keystore.pfx -out keystore.pem -nodes You can add -nocerts to only output the private key or add -nokeys to only output the certificates.\nConvert a PEM Certificate file and a Private Key to PKCS#12 (.pfx .p12) #[root@server ~]# openssl pkcs12 -export -out certificate.pfx -inkey privatekey.key -in certificate.crt -certfile cacert.crt ","date":"31 December 2015","permalink":"/posts/common-openssl-commands/","section":"Posts","summary":"This technical article looks at some useful OpenSSL commands for working with certificates.","title":"Common OpenSSL Commands"},{"content":"Disable RHEL7 IPv6 #This how-to will show you how to disable IPv6 on RHEL7. IPv6 is enabled by default on a standard install of RHEL 7, and the method to disable it is pretty much the same on Redhat 7 as it is on Redhat 6. Below are the details on how to do this.\nCheck IPv6 #Check that IPv6 is actually configured.\n[root@server ~]# ip addr 1: lo: \u0026amp;lt;LOOPBACK,UP,LOWER_UP\u0026amp;gt; mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: \u0026amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026amp;gt; mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 00:19:5e:64:03:09 brd ff:ff:ff:ff:ff:ff inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::115:8dff:fd64:409/64 scope link valid_lft forever preferred_lft forever Disable IPv6 #To disable IPv6 run the following commands.\n[root@server ~]# sysctl -w net.ipv6.conf.all.disable_ipv6=1 net.ipv6.conf.all.disable_ipv6 = 1 [root@server ~]# sysctl -w net.ipv6.conf.default.disable_ipv6=1 net.ipv6.conf.default.disable_ipv6 = 1 Check IPv6 is Disabled #Check that IPv6 is now disabled; the inet6 entries should be gone.\n[root@server ~]# ip addr 1: lo: \u0026amp;lt;LOOPBACK,UP,LOWER_UP\u0026amp;gt; mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 2: eth0: \u0026amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP\u0026amp;gt; mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000 link/ether 00:19:5e:64:03:09 brd ff:ff:ff:ff:ff:ff inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0 valid_lft forever preferred_lft forever Disable IPv6 Permanently #To disable IPv6 permanently you need to add the settings to the networking config.\nEdit the following file\n[root@server ~]# vi /etc/sysctl.conf Add the following content to the end of the file\n# Disable IPv6 net.ipv6.conf.all.disable_ipv6 = 1 net.ipv6.conf.default.disable_ipv6 = 1 That should be it; IPv6 will now be disabled at boot time on your Redhat 7 \u0026amp; CentOS 7 system.\n","date":"28 July 2014","permalink":"/posts/disable-ipv6-rhel7/","section":"Posts","summary":"This technical post goes through the steps needed to disable IPv6 on a Redhat 7 based system","title":"Disable IPv6 
RHEL7"},{"content":"","date":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking"},{"content":"Overview #This HowTo will provide some information about the different types of hostnames and how to set them in RHEL7 (Redhat 7 or CentOS 7) based machine. If you have built your new RHEL7 based machine and have now got a bit stuck over how to change the hostname from localhost.localdomain to what ever you want this is the how to for you.\nTypes Of Hostnames #There are three types of hostnames: Static, Pretty and Transient.\nStatic Hostname #The Static hostname is essentially the traditional hostname which is stored in the \u0026ldquo;/etc/hostname\u0026rdquo; file.\n[user@server ~]$ cat /etc/hostname server.example.com Transient Hostname #The Transient hostname is a dynamic hostname which is maintained at a kernel level. It is initialized by the static hostname, but can be changed by DHCP and other network services.\nPretty Hostname #The Pretty hostname is a free form hostname for presentation to the user.\nSet The Hostname #The hostname can be changed by editing the \u0026ldquo;/etc/hostname\u0026rdquo; file or with the hostnamectl command.\n[root@server ~]# hostnamectl set-hostname server-test.example.com This command will set all three hostnames at the same time, but all three can be set individually using the \u0026ldquo;-static\u0026rdquo;, \u0026ldquo;-transient\u0026rdquo; or \u0026ldquo;-pretty\u0026rdquo; flags.\nValidate \u0026ldquo;/etc/hostname\u0026rdquo; has been updated\n[root@server ~]# cat /etc/hostname server-test.example.com Hostname Information #You can see a bit of information with the hostnamectl command which is very useful. man hostnamectl\n[root@server ~]# hostnamectl Static hostname: server.domain.tld Icon name: computer-vm Chassis: vm Machine ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Boot ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Virtualization: vmware Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-123.4.4.el7.x86_64 Architecture: x86_64 ","date":"13 July 2014","permalink":"/posts/set-hostname-rhel7/","section":"Posts","summary":"This technical article walks through how to set the hostname of a server running Redhat 7 OS.","title":"Set RHEL7 Hostname"},{"content":"","date":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories"}]