# Conception, Construction and Evaluation of a Raspberry Pi Cluster

*Monday, Aug 9, 2021 · Projects · Tags: Raspberry Pi, cluster, failover, Kubernetes, k8s, Docker, HypriotOS, PiCube, GPIO, YAML, Bash*

The goal of this work is to create an affordable, energy-efficient and portable mini-supercomputer: ideally a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled.

## Introduction

Raspberry Pis are changing the computer industry. Originally developed to provide low-cost computers to schools, they have expanded far beyond their intended use. This inexpensive technology can be used to accomplish previously unexplored tasks. One such application is a cluster computer that can run parallel jobs. Many systems built for parallel computing are either expensive or unavailable outside of academia. Supercomputers are expensive to purchase as well as to use, power, and maintain. Although average desktop computers have come down in price, the cost can still become quite high when a larger amount of computing power is required. In today's world of information technology (IT), we are confronted daily with technologies such as cloud computing, cluster computing and high-performance computing. All of these approaches are intended to simplify people's work and lives. Cloud providers such as Microsoft or Amazon make these technologies available to customers as paid services, so users can draw on their computing power without having to buy it at a high price.
The advantage here is the almost unlimited scaling of resources. Distributed computing, an infrastructure technology and the basis for providing cluster computers over the Internet, is used to meet increasing resource demands. By interconnecting and adding remote systems, computing power and performance can be scaled dynamically and provided in theoretically unlimited quantities.

## Motivation and objective of the work

Unlimited computing power suggests an approach to solving seemingly intractable and complex problems, such as the simulation of black holes or the calculation of the Milky Way. Expensive supercomputers are oversized when it comes to testing new applications for high-performance computers or solving complex problems. In this work, we address the question of whether we can build an independent, comparable, but less expensive system with few resources that allows us to solve complex problems in the same way. In 1994, Donald Becker and Thomas Sterling started the Beowulf project to create a low-cost but nevertheless powerful alternative to supercomputers. Following this example, the idea arose to pursue the same principle on a smaller scale. The goal of this work is to create an affordable, energy-efficient and portable mini-supercomputer: ideally a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled.
A Raspberry Pi is ideal for this purpose because of its low price, low power consumption, and small size. At the same time, it still offers decent performance, especially considering the computing power offered per watt.

## Procedure

This paper is divided into five parts. The first part is dedicated to clarifying the terminology and background of all technologies involved and how they relate to the cluster. Building on this, the second part covers the concept, the design, and the requirements placed on the system. The main part deals with the construction of the Raspberry Pi cluster and thus forms the practical component, in which design decisions are illustrated and the construction is explained. Following this, important factors such as scalability, performance, cost and energy efficiency are discussed and evaluated; different use cases are addressed and the technical possibilities considered. Finally, the evaluation results are summarized, limitations are pointed out and future extensions are presented.
## Background

This chapter covers the basic technologies that form the basis of parallel and distributed systems. These technologies build on each other, as in a layered system, and depend on each other. Architectures and techniques based on them, such as virtualization or cluster computing, in turn provide the framework for container virtualization and its management systems.

### Parallel and distributed systems

Before computers evolved into distributed computing systems, various computing technologies underwent fundamental changes over the past decades; we discuss these briefly to make the framework of parallel and distributed systems easier to understand.

#### Amdahl's and Gustafson's laws

At the beginning of this work, the theoretically unlimited increase of computing power was mentioned. Amdahl's law (formulated in 1967 by Gene Amdahl) deals with exactly this question: whether an unlimited increase in speed can be achieved with an increasing number of processors. It describes how the parallelization of a piece of software affects its acceleration.
The software is divided into non-parallel (sequentially executable) and parallelizable portions. Sequential parts include process initialization, communication and memory management; they are necessary for synchronizing the parallel parts, form dependencies among themselves and therefore cannot be executed in parallel. The parallelizable parts are the computations distributed across the processors. It is important to estimate how much performance gain is achieved by adding a certain number of processing units to a parallel system. Adding more computing units does not necessarily improve performance, because the expected gain tends to flatten or saturate if we blindly add more computing resources. It is therefore important to have a rough estimate of the optimal number of resources to use. Suppose a hypothetical system has only one processor, with a normalized runtime of $1$. Let $P$ denote the fraction of the runtime spent in the parallelizable portion; the runtime of the sequential part is then $(1 - P)$. The sequential runtime does not change, but the parallelizable part is optimally distributed across all processors and therefore runs $N$ times as fast. This yields the runtime

$$\underbrace{(1 - P)}_{\text{sequential}} + \underbrace{\frac{P}{N}}_{\text{parallel}}$$

and the resulting time gain:

$$\text{Speedup according to Amdahl} = \frac{\text{original duration}}{\text{new duration}} = \frac{1}{(1 - P) + \frac{P}{N}}$$

Here $N$ is the number of processors and $P$ the runtime fraction of the parallelizable program. [1] Gene Amdahl called this time gain the speedup. With massive parallelism, the time of the parallelizable portion can be reduced arbitrarily, but the sequential portion remains unaffected.
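Amdahl's speedup formula can be checked numerically. For comparison, the sketch below also includes Gustafson's scaled speedup, $S(N) = N - (1 - P)(N - 1)$; that formula is the standard textbook form and our addition, not taken from this text:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    """Gustafson's law: scaled speedup when the problem grows with n."""
    return n - (1.0 - p) * (n - 1)

# With a 1% sequential portion, Amdahl's speedup saturates below 100
# no matter how many processors are added, while Gustafson's scaled
# speedup keeps growing with the problem size:
for n in (10, 100, 1000, 10**6):
    print(n, round(amdahl_speedup(0.99, n), 1), round(gustafson_speedup(0.99, n), 1))
```

Running this makes the saturation visible: the Amdahl column creeps toward 100 and stops, which is exactly the one-percent example discussed below.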
As the number of processors increases, so does the communication overhead between them, so that above a certain number they are busier communicating than processing the problem. This reduces performance and is referred to as task-distribution overhead. A program can never be parallelized completely, such that all processors are busy at the same time. No matter how many processor units are used, if the non-parallelizable portion of the application is, for example, one percent, the speedup can be at most 100. Gene Amdahl thus concluded that it makes no sense to keep increasing the number of computing units in order to generate unlimited computing power. [2]

Amdahl's model remains valid as long as the total number of computing operations stays the same while the number of computing units increases. However, if the job size grows while the number of computing units increases, then Gustafson's law applies. John Gustafson established this law in 1988; it states that as long as the problem being addressed is large enough, it can be parallelized efficiently. Unlike Amdahl, Gustafson shows that massive parallelization is nevertheless worthwhile. A parallel system cannot become …

### Cluster

A cluster is a computer network and usually refers to a group of at least two servers, also called nodes. All nodes are connected to each other via a local area network (LAN) and together form the logical unit of a supercomputer.
Gregory Pfister defines a cluster as follows: "A cluster is a type of parallel system that consists of interconnected whole computers and is used as a single, unified resource." [22]

Clusters are an approach to achieving high performance, high reliability, or high throughput by using a collection of interconnected computer systems. [23] Depending on the type of setup, either all nodes are active at the same time to increase computing power, or one or more nodes remain passive so that they can stand in for a failed node. For data transmission, all servers are connected to a network switch via at least two network connections. Two connections are typical, on the one hand to eliminate the switch as a single point of failure (SPOF), and on the other hand to achieve higher data-transfer rates. [24] With reference to the current Top 500 list of the world's fastest computers, the term supercomputer is entirely appropriate: several cluster systems are among the ten fastest computers in the world. Computer clusters are used for three different tasks: providing high availability, high-performance computing and load balancing. [25]

#### Shared and distributed memory

According to the von Neumann architecture, a computer has a shared memory that contains both program instructions and data. Parallel computers are divided into two variants in this respect: systems with shared memory and systems with distributed memory. Regardless of the variant, all processors must always be able to exchange data and instructions with each other. Memory access takes place via a so-called interconnect, a connection network for transferring data and commands. In clusters, the interconnect is an important component for exchanging data between management and load-distribution processes. [26] In systems with shared memory, all processors share one memory.
The memory is divided into fixed memory modules that form a uniform address space accessible to all processors. The von Neumann bottleneck comes into play here: the interconnect, in this case the data and instruction bus, becomes the bottleneck between memory and processor. Due to the sequential, step-by-step processing of program instructions, only as many actions can be performed as the bus is capable of. As soon as the speed of the bus falls significantly below the speed of the processors, the processors repeatedly have to wait. In practice, this waiting time is mitigated by buffer memories (caches) located between processor and memory, which make program instructions available to the processor quickly and directly. [27]

Source: own representation based on (Bauke & Mertens, 2006, p. 22); cf. (Christl, Riedel, & Zelend, 2007, p. 5). Figure 7: Parallel computers with shared memory connected via the data and instruction bus.

Figure 7 shows memory (M) and processors (P) connected via the interconnect. Computer systems with distributed memory, on the other hand, have a separate memory for each processor, so a connection network is required. Once a shared memory is dispensed with, the number of processors can be increased without problems. With distributed memory, each processor has control over its own address space, since it is allocated its own local memory. Communication again takes place via an interconnect, which in this case is a local network.
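The two memory models can be mimicked at the software level — a loose analogy only, not the hardware itself. In the sketch below, threads updating one counter in a common address space stand in for shared memory, while workers that keep private state and hand over results through an explicit channel stand in for distributed memory and its interconnect; all names here are our own illustration:

```python
import threading
import queue

# Shared memory: all workers update one counter in a common address space.
# A lock plays the role of synchronizing access to the shared resource.
counter = 0
lock = threading.Lock()

def shared_worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

# Distributed memory: each worker keeps local state and communicates its
# result over an explicit channel (the "interconnect").
def distributed_worker(iterations: int, channel: "queue.Queue[int]") -> None:
    local = 0                 # private to this worker
    for _ in range(iterations):
        local += 1
    channel.put(local)        # explicit message passing at the end

def run(n_workers: int = 4, iterations: int = 1000) -> tuple:
    """Run both variants and return (shared_total, distributed_total)."""
    global counter
    counter = 0
    chan: "queue.Queue[int]" = queue.Queue()
    threads = [threading.Thread(target=shared_worker, args=(iterations,))
               for _ in range(n_workers)]
    threads += [threading.Thread(target=distributed_worker, args=(iterations, chan))
                for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    distributed_total = sum(chan.get() for _ in range(n_workers))
    return counter, distributed_total
```

Both variants arrive at the same total, but the second touches no shared state until the final reduction — the trade-off between contention on a shared bus and communication over a network, in miniature.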
The advantage of computer systems with distributed memory is that the number of processors can be increased; the disadvantage lies in the disproportion between computing and communication performance, since the transport of data between CPUs (Central Processing Units) …

### Virtualization

Before we delve deeper into container management systems (see Container Management Systems), we look at the underlying technology of virtualization, which is necessary for the operation of container technologies such as Docker. In computer science, virtualization refers to the creation of a virtual, rather than physical, version of something, such as an operating system, server, storage device, or network resource: a technology in which an application or an entire operating system is abstracted from the actual underlying hardware. In connection with container technologies, we distinguish between two techniques of virtualization: hypervisor-based and container-based. [39]

#### Hypervisor-based virtualization

A key application of virtualization technology is server virtualization, which uses a software layer called a hypervisor to emulate hardware such as memory, CPU, and networking. The guest OS, which normally interacts with real hardware, instead interacts with a software emulation of that hardware, and often the guest OS has no idea that it is running on virtualized hardware.
The hypervisor decides which guest OS gets how much memory, processor time and other resources from the host machine, which runs the host OS and the hypervisor. Each OS appears to have direct access to the processor and memory, but the hypervisor actually controls the host processor and resources, allocating what each OS needs. While the performance of such a virtual machine does not match that of an OS running on real hardware, hypervisor-based virtualization works quite well, because most OSes and applications do not require full use of the underlying hardware. It allows greater flexibility, control and isolation by eliminating the dependency on a specific hardware platform. Originally intended for server virtualization, the concept has since expanded to applications, implemented using isolated containers. [40]

#### Container-based virtualization

Container-based virtualization is a method of virtualizing applications and even entire operating systems. Containers in this sense are isolated partitions integrated into the kernel of a host machine. In these isolated partitions, multiple instances of applications can run without each needing its own OS. The OS is virtualized, while the processes inside each container have their own identity and are isolated from processes in other containers. Software running in containers communicates directly with the host kernel and must be executable on the operating system and CPU architecture of the host. Because no hardware is emulated and no complete operating system is booted, containers can be started in a few milliseconds and are more efficient than traditional virtual machines. Container images are typically smaller than virtual-machine images, because they do not need to contain device drivers or a kernel to run an operating system.
Because of their compactness, application containers find their predominant use in software development. Developers do not have to set up their development machines by hand; instead, they use pre-built container images. These images are snapshots of entire application stacks that can be moved freely between different host machines, also called shipping. This is one of the reasons why container-based virtualization has become increasingly popular in recent years. [41] Examples of container platforms are Docker from Docker Inc. and rkt from the developers of the CoreOS operating system, with Docker enjoying growing popularity. Compared to virtual machines, Docker represents a simplified approach to virtualization. [42] Figure …

### Container Management Systems

As soon as several containers are to be distributed and run on a parallel system such as a cluster, a container management system becomes necessary. Such a system manages, scales and automates the deployment of application containers. Well-known open-source systems are Kubernetes from Google and Docker Swarm. Another well-known container manager worth mentioning is Red Hat's OpenShift, but we will not discuss it further in this paper. These cluster management tools play an important role as soon as a cluster has to take care of tasks such as load balancing and scaling. In the following, we introduce the first two container managers mentioned and explain their structure and their use with containers.
#### Docker Swarm

Docker is an open-standards platform for developing, packaging, and running portable distributed applications. With Docker, developers and system administrators can build, ship, and run applications on any platform, such as a PC, the cloud, or a virtual machine. Obtaining all the necessary dependencies for a software application, including code, runtime libraries, and system tools and libraries, is often a challenge when developing and running an application. Docker simplifies this by consolidating all the required software for an application, including its dependencies, into a single unit called a Docker image that can run on any platform and environment. Software based on a Docker image runs in an isolated environment called a Docker container, which contains its own file system and environment variables. Docker containers are isolated from each other and from the underlying operating system. [43]

One solution already integrated into Docker is Docker Swarm mode, a cluster manager for Docker containers. Swarm allows administrators and developers to set up and manage a cluster of multiple Docker nodes as a single virtual system. Swarm mode is built on Docker Engine, the layer between the operating system and container images. Clustering is an important feature for container technology, because it creates a cooperative group of systems that provides redundancy and enables failover when one or more nodes fail. A Docker Swarm cluster gives users and developers the ability to add or remove containers as compute requirements change. A user or administrator controls a swarm through a swarm manager, which orchestrates and deploys containers. Figure 15 shows the schematic structure and the relationship between the instances. [44]

Source: own representation. Figure 15: Structure and communication of the manager and worker instances.
The swarm manager allows a user to create a primary manager instance and multiple replica instances in case the primary instance fails, similar to an active/passive cluster. In Docker Engine's swarm mode, the user can provision manager and worker nodes at runtime. [45]

#### Google Kubernetes

The name Kubernetes comes from the Greek and means helmsman or pilot. Kubernetes is known in specialist circles by the numeronym K8s, in which the eight letters "ubernete" are replaced by "8". Kubernetes provides a platform for scaling, automatically deploying and managing container applications on distributed machines. It is an orchestration tool and supports container tools such as Apache Mesos, Packer, and Docker. [46] A Kubernetes system consists of master and worker nodes, the same principle as Docker Engine's swarm mode, with the manager instances named masters. The smallest unit in a Kubernetes cluster is the pod, which runs on a worker node and consists of one or more containers; a worker node can in turn run multiple pods. A pod is a worker process that shares virtual resources such as network and volumes among its containers. [47]

Source: own representation based on (Pods and Nodes, 2018). Figure 16: Granular representation of node, pod …

### Raspberry Pi

Credit-card-sized single-board computers (SBCs) such as the Raspberry Pi, developed in the UK by the Raspberry Pi Foundation, were originally intended to promote computer science education in schools.
Abbreviations such as "RPi", "RasPi" or simply "Pi" are common for the Raspberry Pi. Like smartphones, single-board computers are equipped with ARM processors (Advanced RISC Machines). Before the development of ARM in 1983, mainly CISC and RISC processors were on the market. CISC stands for Complex Instruction Set Computer; processors with this architecture are characterized by very complex instruction sets. Processors with a RISC (Reduced Instruction Set Computer) architecture, on the other hand, have a reduced instruction set and therefore also operate with low power requirements. The Pi's board is equipped with a system-on-a-chip (SoC, i.e. single-chip system) with the identifier BCM2837 from Broadcom. The SoC contains a 1.2 GHz ARM Cortex-A53 quad-core CPU and a VideoCore IV GPU (Graphics Processing Unit), and the board carries 1 GB of RAM. The Pi does not include a built-in hard drive, but uses an SD card for booting and permanent data storage. It has an Ethernet port based on the RJ45 standard for connecting to a network, an HDMI port for connecting a monitor or TV, USB (Universal Serial Bus) ports for keyboard and mouse, and a 3.5 mm jack for audio and video output. [50]

Source: own representation based on (Merkert, 2017, p. 12). Figure 17: Illustration of a Raspberry Pi 3 Model B and its components.

On average one pays 35 EUR for a Raspberry Pi, which makes it economically suitable for integration into a cluster system, since the unit cost of an individual node is low. [51] General Purpose Input Output (GPIO) is the name for programmable inputs and outputs for general purposes. A Raspberry Pi has a total of 40 GPIO pins, twelve of which are for power supply and ground, while 28 serve as an interface to other systems in order to communicate with or control them.
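A small lookup table makes the pin layout concrete. The sketch below covers only a handful of physical pins of the standard 40-pin header; the I2C role of pins 3 and 5 matches the text, while the remaining assignments are general Raspberry Pi knowledge and the helper function is our own illustration:

```python
# Partial map of physical header pins to their roles on the 40-pin header.
PIN_ROLES = {
    1: "3.3V power",
    2: "5V power",
    3: "GPIO2 (I2C SDA)",   # addressed e.g. by an LCD display
    4: "5V power",
    5: "GPIO3 (I2C SCL)",
    6: "ground",
}

def pin_role(physical_pin: int) -> str:
    """Return the role of a physical header pin, if known."""
    if not 1 <= physical_pin <= 40:
        raise ValueError("Raspberry Pi header pins are numbered 1-40")
    return PIN_ROLES.get(physical_pin, "GPIO or power/ground (not in this partial map)")
```

A lookup like `pin_role(3)` then identifies the I2C data line used for the display in the following section.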
GPIO pins 3 and 5 (see Figure 18) allow devices such as an LCD display to be addressed via the Inter-Integrated Circuit (I2C) serial data bus.

Source: own representation based on (Raspberry Pi 3 GPIO Pin Chart with Pi, 2018). Figure 18: Illustration of the different GPIO pins of the Raspberry Pi.

## Conception and design

In this chapter, we present the basic concept of building a cluster from Raspberry Pi single-board computers. These mini-computers form the basis for constructing a cost-effective and energy-efficient cluster system. Inspired by Joshua Kiepert and his 32-node RPiCluster, and by Nick Smith and his first design of a 5-node Raspberry Pi cluster, the idea emerged to develop and improve certain components further: increasing the cooling performance by means of an optimized airflow supply, adding and logically arranging further connection possibilities, and allowing for modular expandability of the cluster. [52]

### Requirements

The design of this Raspberry Pi cluster focuses on the following requirement criteria: cost efficiency, energy efficiency, scalability, and resilience.
Further criteria are a visual, easy-to-read status and information display for current system values, and ideal cooling with an optimized airflow for the best possible removal of heat. Furthermore, the entire design concept is based on certain design requirements, which we discuss in more detail in the design decisions (see Design decisions).

#### Cost and energy efficiency

The factors of cost and energy efficiency are paramount and predominantly influence the conception and design. In order to keep acquisition costs as low as possible while still offering efficient computing power, Raspberry Pi 3 single-board computers are to be installed. Unnecessary cable lengths and heavy, expensive materials such as sheet steel or aluminum for the housing are avoided, and weight and cost are further reduced by using plastic instead of steel screws. The system should be reproducible at low cost and consume as little power as possible. A power consumption of less than 60 watts is planned, roughly equivalent to the average consumption of a conventional light source such as an incandescent lamp. Such low power consumption also implies portability: it should be possible to use this cluster on the move.

#### Scaling and resilience

Scaling is considered in this system in two respects. On the one hand, the user should be given the option of connecting the entire system with other clusters in order to scale cluster by cluster at this level; this is also called horizontal scaling. On the other hand, it should be possible to scale vertically, node by node within a cluster, by adding or removing individual nodes. This is done either automatically using cluster software or by physically adding or removing further single-board computers.
Due to the modular structure of the cluster, the primary goal is to expand individual clusters, i.e. vertical scaling in the sense above. In parallel to scaling, failover is an important factor when it comes to keeping the cluster alive. As soon as the cluster system scales on the software side, all peripheral components must be designed and dimensioned accordingly so as not to reach their physical limits, such as maximum storage capacity or computing power. This also applies to components such as the network switch and the power supply unit. With the help of organizational measures and technical redundancies, this fail-safety is to be guaranteed; this is also referred to as system availability.

#### Status and information display

For direct perception of current system values such as host name, system time, and processor and case temperature, a visual information display should be available. These important and system-critical values should be immediately and directly readable without technical aids such as a keyboard or an external monitor. Furthermore, this display should have a backlight so that it remains readable in dark rooms or with little to no light.

#### Cooling

Passive cooling and optimized case ventilation are important points in the design. Passive cooling saves energy on the one hand and costs on the other. Active cooling is nevertheless necessary to ensure the supply of cold air and the removal of warm air. Likewise, a basic law of physics, the so-called chimney effect, must not be lost sight of during further planning: warm air rises, cold air stays near the ground.
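Returning to the status display: on a Raspberry Pi, the processor temperature can be read with the firmware tool `vcgencmd measure_temp`, which prints a line like `temp=48.3'C`. The command is real Raspberry Pi tooling; the small parser and wrapper below are our own sketch of how such a value could be fed to the display:

```python
import re
import subprocess

def parse_vcgencmd_temp(output: str) -> float:
    """Extract the temperature in degrees Celsius from vcgencmd output
    such as "temp=48.3'C"."""
    match = re.search(r"temp=([\d.]+)'C", output)
    if match is None:
        raise ValueError(f"unexpected vcgencmd output: {output!r}")
    return float(match.group(1))

def read_cpu_temp() -> float:
    """Query the SoC temperature on a Raspberry Pi (requires vcgencmd)."""
    out = subprocess.run(["vcgencmd", "measure_temp"],
                         capture_output=True, text=True, check=True).stdout
    return parse_vcgencmd_temp(out)
```

Keeping the parsing separate from the subprocess call means the value can be formatted for the LCD, logged, or checked against a threshold without hardware in the loop.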
Air flow and heat dissipation The processors of the individual nodes and the integrated circuits (ICs) of the individual peripheral components develop a certa","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:5:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Design decisions The basic characteristics of this Raspberry Pi cluster are minimalism, transparency and simplicity. Minimalism in architecture, be it in building or model construction, is characterized by the reduction to simple cubic forms. The goal is the formation of geometric and pure forms, which is made possible with the help of transparent building materials such as glass. This brings us to the property of transparency. On the software side, transparency means that the user of this system knows exactly how the cluster system works, how it scales or what software is used. On the hardware side, it is clear which physical components such as cables, network switches or connectors are connected to each other or whether systems emit visual signals. Simplicity, on the other hand, stands for easy understanding of the system. It should, at first glance, convey the usefulness of this system, from the point of view of a novice as well as a tech-savvy user. Minimalism and transparency: cube We decide on a compact and portable design in the form of a cube and christen the system with the name PiCube. The Pi, as in Raspberry Pi, stands for Python Interpreter and signals that this system supports the Python programming language and is supported by all common operating systems like Raspbian or CoreOS. 
Python convinces with its minimalistic and easily understandable programming style and also contributes to the overall concept of this system. The English word cube denotes the geometric solid. Another idea for the naming is PiKube, where Kube stands for the cluster management system used and at the same time derives from Kubus, the ancient Greek kybos or the Latin cubus for cube. However, since the cube case does not have exactly equal faces, we decide to use PiCube, since a Kubus is a regular hexahedron, i.e. a solid with six faces of equal size. Simplicity: Elastic Clip Concept In order to allow easy assembly and disassembly, without the use of additional tools or fastening screws of any kind, the Elastic Clip concept by Patrick Fenner is used. This concept allows us to create a slim and elegant design. For this purpose, we use acrylic glass, which is transparent, light and flexible, as the construction material for the housing. The clips make it possible to connect the individual acrylic glass sides of the cube at a 90 degree angle. The real highlight here is the automatic snap-in of the clip connections in the insertion openings provided for this purpose. Removal or dismantling of the cube walls is done by bending the clip so that the connections can be released. Source: Own representation based on (Laser-Cut Elastic-Clips, 2017) Figure 19: Illustration of clip dimensions with and without force application. $F$ stands for the force applied to the clip. The clip dimensions are: $a$ = 4mm, $b$ = 2mm, $d$ = 2mm, $l$ = 25mm. Depending on the nature, flexibility and material of the acrylic glass used, the clip may break if too much force is applied. The problem is that the maximum force $F$ acts at the upper right end of the clip; if the clip length $l$ is too short, or the clip is bent too far or subjected to too much load, it breaks. 
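The length dependence can be made explicit with a standard cantilever approximation (a back-of-the-envelope estimate of our own, not taken from the cited source). Treating the clip as a rectangular beam of length $l$, thickness $d$ and width $a$ whose tip is deflected by $\delta$, Euler-Bernoulli beam theory gives a maximum bending stress at its root of approximately $$\sigma_{max} = \frac{3\,E\,d\,\delta}{2\,l^{2}},$$ where $E$ is the Young's modulus of the acrylic. Because $\sigma_{max}$ scales with $1/l^{2}$, halving the clip length roughly quadruples the root stress for the same deflection, which is exactly why a clip that is too short snaps under bending.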
To counteract this, there is a simple way to distribute the load on the clip at maximum force by widening the cut. This reduces the risk of the clip breaking. Source: Own representation based on (Laser-Cut Elastic-Clips, 2017) Figure 20: Widening the incision site increases durability. Due to the material nature and flexibility of acrylic glass, the use of this elastic clip concept is most suitable. If other materials such as medium-density fiberboard (MDF) or simple wood are considered, it must be noted that, due to their unidirectional wood fibers, they offer much weaker resistance and thus less flexibility for bending the clip elastically. Use in conjunction with MDF is an alternative, but the clip will inevitably break under too much load. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:5:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"PiCube 2D model and logo Inkscape is professional software for editing two-dimensional (2D) vector graphics. With the help of this software we create a 2D graphic of the housing plates. Prior to manufacturing the enclosure, we sketch and produce a two-dimensional template to determine the exact dimensions for connections, fasteners, and required air slots. Likewise, we are able to determine the exact cutting dimensions and positioning for the elastic clips (see Design decisions) to the millimeter. In the following figure, all six sides are shown and marked accordingly: L for left side, R for right side, F for front, T for top, G for bottom, and B for back. The back contains the connectors for HDMI, network and power, as well as the ventilation grilles and mounting holes for the case fan. 
The front has inlets for the double USB socket and the LCD display. Ventilation grilles are also roughly sketched here. All other sides, except the top, also have mounting holes for the rest of the components like the switch, USB charger and the additional side air intakes. Source: Own representation. Figure 21: 2D model design of the housing including all connections. 3D model and connections To ensure that all the required connections (see Connections) can be correctly attached to the housing, we use the Rhinoceros 3D program to create three-dimensional graphics. A 3D model helps to better visualize and represent the housing to be constructed. Physical components can be inserted, rotated or scaled in size. Due to the exact specification of the dimensions of the individual components, a correspondingly realistic model is created before it is manufactured. The rendered graphics (see Figures 22 and 23) show the overall design with all the individual components installed, marked in different colors for better representation. Blue marks the USB components and light gray the case fan. Dark gray represents network components and red the HDMI port. Source: Own representation. Figure 22: 3D model design including components with a view of the front of the housing. Source: Own representation. Figure 23: 3D model design including components with a view of the back of the housing. Components and costs For the construction of this prototype, the individual components are obtained from various suppliers or mail-order companies such as Amazon and eBay. The acrylic glass plates are milled in collaboration with Andreas Gregori from MAD Modelle Architektur Design. For the production of the case we do not use laser cutting, but normal milling. This has the advantage that we avoid smoke marks or scorching, which occur during laser cutting due to high temperatures. Milling has another advantage: the possibility of surface milling. 
Thus, we are able to engrave the PiCube logo (see Figures 21 and 24) into the upper and front cube surface. The five Raspberry Pis and the matching 8 GB SD cards make up the largest part of the costs, amounting to 246.75 EUR. The middle part of the costs, which ranges between 13.75 and 22.99 EUR, is taken up by the Gigabit switch, the USB charger and the USB cables. The rest of the hardware, such as the LCD display, cables and plastic screws, is in the price range of 0.65 to 9.80 EUR.

| Unit(s) | Component | Unit price |
|---|---|---|
| 5 | Raspberry Pi 3 Model B | 42.70 Euro |
| 5 | SanDisk Ultra 8 GB microSDHC UHS-I Class 10 | 6.65 Euro |
| 1 | Edimax ES-5800G V2 Gigabit Switch (8-Port) | 22.99 Euro |
| 1 | Anear 60 Watt USB Charger (6-Port) | 18.99 Euro |
| 5 | Micro USB cable (15 cm) | 2.75 Euro |
| 5 | Transparent power cord (15 cm) | 0.79 Euro |
| 2 | RJ45 jack (female-male) | 2.74 Euro |
| 1 | Dual USB 2.0-A female-male connector | 4.53 Euro |
| 1 | LCD display module 1602 HD44780 with TWI controller | 4.45 Euro |
| 1 | AC 250V 2.5A IEC320 C7 socket | 1.39 Euro |
| 1 | C7 power cable 90 degree angled (1 meter) | 3.26 Euro |
| 1 | Cable jumpers (female-female, 40 wires, 20 cm) | 0.65 Euro |
| 56 | M3 Nylon Hex Spacer Nuts and Bolts White | 0.05 Euro |
| 1 | Antec TRICOOL 92mm 4-pin case fan | 9.80 Euro |
| 1 | Milling cut of the acry","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:5:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Construction The practical part of the work, the construction, follows with a step-by-step explanation of how to assemble, install and configure the Raspberry Pi cluster. The installation instructions and scripts used in the following are available either from the sources mentioned or from the GitHub repository http://github.com/segraef/PiCube. 
All installation and configuration steps are documented in a way that makes it easy to build the cluster on your own. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:6:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Mounting and wiring Elastic clips (see Simplicity) allow the individual case sides to be easily plugged together without the use of screws. The pre-milled holes for the switch, USB charger and LCD display allow the individual components to be attached using the plastic nuts and screws. Pre-milled holes for the LCD display, network ports, HDMI port, and power connector provide the proper slots for the cables to be subsequently connected outside of the case. The assembly is done according to the following steps:
1. Connect the single-board computers together using hex spacers.
2. Screw the stacked Raspberry Pis to the left inner side.
3. Attach the switch to the base plate.
4. Attach the LCD display and USB sockets to the front panel.
5. Screw the USB charger to the right side plate.
6. Mount the case fan, HDMI and network jacks.
After the entire interior has been assembled, the side panels are clipped in step by step. To do this, start with the base plate, onto which the left and right side plates are clipped. Before assembling the front panel, we wire the individual components. The LCD display requires both power and control signals. A total of 4 cable jumpers are used for this purpose: two for power and two for the control and transmission signals, which we connect to pins 2, 3, 5 and 6 of the GPIO header (5 V, SDA, SCL and GND) of the Raspberry Pi that acts as the master node. 
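As an aside, the four jumper connections can be summarized in a tiny lookup table (physical header pin numbers; the role names are our own annotation, assuming the standard I2C/TWI wiring of an HD44780 1602 module on a Raspberry Pi):

```python
# Physical GPIO header pins used for the four LCD jumpers, assuming
# the usual I2C/TWI wiring of an HD44780 1602 module on a Raspberry Pi.
LCD_PINS = {
    2: '5V',    # power
    6: 'GND',   # ground
    3: 'SDA',   # I2C data
    5: 'SCL',   # I2C clock
}

# Split the mapping into the two power jumpers and the two signal jumpers.
power_pins = sorted(p for p, role in LCD_PINS.items() if role in ('5V', 'GND'))
signal_pins = sorted(p for p, role in LCD_PINS.items() if role in ('SDA', 'SCL'))
print(power_pins, signal_pins)
```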
All mini computers are connected to the switch using RJ45 network cables. Likewise, we connect the cables of the network jacks, which serve as network port extensions, to the switch. Micro-USB cables are typically used for data transfer; in this scenario they are used only to power the individual single-board computers. We connect the network switch and all Raspberry Pis to the USB charger using the USB cables. We connect the case fan with its 3-pin connector to pins 2 and 6 of the GPIO header (5 V and GND) of one of the Raspberry Pis to supply it with power. After all the necessary cables and components are connected, we move on to the cluster installation. The following two images show the PiCube in a partially and fully wired state. Source: Own representation. Figure 24: Partially wired PiCube without front and bottom panels. Source: Own representation. Figure 25: Fully wired and closed PiCube. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:6:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Installation Operating system To provide a suitable developer system and to meet all requirements, we use the HypriotOS operating system as the base system for the cluster. The OS provides a Docker-optimized Linux kernel and is therefore ideally suited for this cluster. It already includes all the required modules such as Docker Engine, Docker Machine and Docker Compose. HypriotOS is an operating system specially developed for the Raspberry Pi, based on the Linux distribution Debian. Important prerequisites for the successful operation of a cluster system are identical and redundantly designed hardware (see Cluster) and the cluster software that can run on it. 
Among other things, this concerns the same software and driver versions for all participating nodes. To ensure that the cluster system maintains a homogeneous operating system and version structure on all nodes, we use operating system images. Imaging and provisioning For the installation of Raspberry Pi operating systems, memory images are used. Images are disk images in a compressed file that contains files, file system structures, and boot sectors. Simply put, an image contains an exact disk copy of an operating system. The advantage of using images is that an operating system does not have to be installed. In order for this exact disk copy to be deployed, we copy the contents to SD cards and insert them into our cluster nodes. Using the Hypriot flash tool, the hostname is passed to the image and is thus hardcoded for the first boot of the Raspberry Pi. Hardcoded means that values, such as the hostname in this case, are passed into the startup configuration of the operating system and are called and used when the system starts. The following code snippet (see Listing 1) shows the command that sets the hostname rpi1 for the image package hypriotos-rpi-v1.6.0.img.zip after downloading the HypriotOS image from the address https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/:
flash --hostname rpi1 https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/hypriotos-rpi-v1.6.0.img.zip
Listing 1: The HypriotOS image is downloaded and copied to the SD card using flash. YAML Ain't Markup Language (YAML) is a human-readable data serialization language and gives us the possibility to define customized parameters for a startup configuration. Thus, it is very easy to pass values into a configuration which will be initialized and applied when each node is started for the first time. 
Examples of start parameters that can be preconfigured:
- Hostname
- WLAN SSID
- WLAN password
- Static or dynamic IP address
The YAML files for our nodes look like this, with the hostname adjusted accordingly for the respective Raspberry Pi:
hostname: \"rpi\"
wifi:
  interfaces:
    wlan0:
      ssid: \"WLAN\"
      password: \"2345251344834395\"
Listing 2: Example of a YAML configuration. Fixed WLAN parameters let individual nodes connect to an existing wireless network. We copy this configuration to the image in the /boot/ directory. The following code snippet shows setting the YAML configuration using flash. On initial startup this accesses /boot/device-init.yaml and copies the appropriate parameters to the operating system's startup configuration:
flash --config https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/hypriotos-rpi-v1.6.0.img.zip
Listing 3: The HypriotOS image is downloaded, copied to the SD card using flash, and given a YAML configuration. Time is precious and, in order to save it, we prepare the respective image for each individual Raspberry Pi and thus take advantage of automatic provisioning. Due to the use of a total of 5 cluster nodes, the manual installation, configuration and maintenance of each individual node would be very time-consuming. Therefore, we provision every single operating system in advance, which means we use a master image that already contains current driver versions and software packages. 
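Preparing one device-init.yaml per node is easily scripted. The following is a hedged Python sketch (the helper function and its layout are our own, mirroring Listing 2; they are not part of the Hypriot tooling):

```python
# Sketch: emit a device-init.yaml per cluster node, mirroring the
# simple key layout of Listing 2 (hostname plus WLAN credentials).
def device_init_yaml(hostname, ssid, password):
    return (
        f'hostname: "{hostname}"\n'
        f'wifi:\n'
        f'  interfaces:\n'
        f'    wlan0:\n'
        f'      ssid: "{ssid}"\n'
        f'      password: "{password}"\n'
    )

# One configuration per node rpi1..rpi5, as used in the PiCube.
configs = {f'rpi{i}': device_init_yaml(f'rpi{i}', 'WLAN', '2345251344834395')
           for i in range(1, 6)}
print(configs['rpi1'])
```

Each generated string would then be written to /boot/device-init.yaml of the corresponding SD card.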
We modify this system image for each individual node","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:6:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Cluster configuration Installing and configuring Kubernetes All five nodes are now configured in the same way as described below; the order is not decisive here. The connection to the individual Raspberry Pis is established via Secure Shell (SSH). In order to establish this secure connection over the network, the terminal program PuTTY is used. As soon as the connection is established, a \"sudo apt-get update\" is performed to update the current package installation lists to the latest version. Afterwards, \"sudo apt-get upgrade\" is used to update all the correspondingly required software packages for Docker. The next step is to install Kubernetes using the following command:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - \u0026\u0026 echo \"deb http://apt.kubernetes.io/ kubernetes-xenial main\" | sudo tee /etc/apt/sources.list.d/kubernetes.list \u0026\u0026 sudo apt-get update -q \u0026\u0026 sudo apt-get install -qy kubeadm
Listing 4: Command to install Kubernetes. Here, the corresponding Kubernetes installation package is downloaded and installed. Now the node rpi1 is selected and the command sudo kubeadm init is executed to initialize the cluster. Thus, the cluster is created and rpi1 is set as the master node. All other hosts are added to the cluster as worker nodes using the kubeadm join command. After the last node is successfully added, the command kubectl get nodes is used to check whether all nodes are ready and added to the computer cluster (see Figure 27). 
Source: Own representation. Figure 27: Status query in the terminal of all nodes. The Kubernetes cluster now has an active master node. If this node ever fails, the kube-controller-manager (see Google Kubernetes) detects this and decides which of the worker nodes will step in as the new active master. Source: Own representation. Figure 28: Schematic structure of Kubernetes on the PiCube cluster. Configuration of the LCD display In order for the LCD display to show values such as IP address, system time, temperature and status information, a corresponding Python script is used. This script is executed automatically at every startup and delivers status values to the LCD display via I2C. Source: Own representation. Figure 29a: Display of temperature, voltage and load on the LCD display. Source: Own representation. Figure 29b: Display of temperature, voltage and load on the LCD display. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:6:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Evaluation After the cluster has been configured and Kubernetes is ready for use, the entire system is evaluated. The requirements set in advance (see Requirements) are critically reviewed, target and actual values are evaluated, and various comparisons are made. This prototype is ideally suited to illustrate and substantiate the research question addressed in this thesis. Section Use case SETI@home deals with a use case that provides information about the extent to which and the purpose for which this custom cluster construction can contribute to scientific research. 
","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Scalability Kubernetes enables easy scaling of pods and the applications within them. Enabled autoscaling scales pods according to their required resources up to a specified limit. In the following example, we test the autoscaling of our cluster using a simple web application and verify the automatic scalability of the system. For this purpose, a Docker image based on a simple web server application is used. This Docker image contains a single web page, which causes maximum CPU load by simulated users when called. The Docker image is launched, causing a web server to run with the corresponding web page as a pod. Kubernetes' autoscaler is enabled with the following configuration:
- KUBE_AUTOSCALER_MIN_NODES = 1: The minimum number of nodes to be used is one.
- KUBE_AUTOSCALER_MAX_NODES = 4: The maximum number of nodes to be used is four. The fifth node thus retains the function as master.
- KUBE_ENABLE_CLUSTER_AUTOSCALER = true: Activation of the autoscaler.
The pod with the web server container is started with the following properties:
- CPU-PERCENT = 50: This value is maintained to keep all pods at an average CPU load of 50 percent. As soon as a pod requires more than half of its available computing power, another pod instance is automatically replicated, i.e. an exact copy of the running pod is created.
- MIN = 1: At least one pod is used for scaling.
- MAX = 10: A maximum of ten replicas of a pod can be used for scaling. 
After the pod is started and user load is simulated, an increase in CPU load to 250 percent is observed, and seven pods have already been scaled out:
$ kubectl get hpa
NAME        REFERENCE                    TARGET  CURRENT  MINPODS  MAXPODS  AGE
php-apache  deployment/php-apache/scale  50%     250%     1        10       2m
$ kubectl get deployment php-apache
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
php-apache  7        7        3           4m
$ kubectl get pods
NAME                         RUNNING  STATUS   AVAILABLE  AGE
php-apache-3046963998-3ewo6  0/1      Pending  0          1m
php-apache-3046963998-8m03k  1/1      Running  0          1m
php-apache-3046963998-ddpgp  1/1      Running  0          5m
php-apache-3046963998-lrik6  1/1      Running  0          1m
php-apache-3046963998-nj465  0/1      Pending  0          1m
php-apache-3046963998-tmwg1  1/1      Running  0          1m
php-apache-3046963998-xkbw1  0/1      Pending  0          1m
Listing 5: Observing CPU utilization increase to 250% while pods are scaled out to provide resources. Two minutes after the user load simulation stops, the CPU utilization drops back to zero and the deployment shrinks from seven pods to one.
$ kubectl get hpa
NAME        REFERENCE                    TARGET  CURRENT  MINPODS  MAXPODS  AGE
php-apache  deployment/php-apache/scale  50%     0%       1        10       7m
$ kubectl get deployment php-apache
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
php-apache  1        1        1           9m
$ kubectl get pods
NAME                         RUNNING  STATUS   AVAILABLE  AGE
php-apache-3046963998-ddpgp  1/1      Running  0          9m
Listing 6: Watching CPU utilization drop to zero percent and the deployment scale down to one pod. As can be seen (see Listing 5), it is very easy to dynamically adjust the number of pods to the load by enabling the cluster autoscaler. Automatic and dynamic scaling can also be very helpful when there are irregularities in cluster utilization. For example, development-related clusters or continuous computing operations can be run on weekends or at night. Compute-intensive applications can be better scheduled so that a cluster is optimally utilized. In all cases, the cluster can be used optimally. 
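Behind the scaling behaviour observed in Listings 5 and 6 lies a simple rule: the autoscaler derives the desired replica count from the ratio of observed to target utilization. A minimal Python sketch of this heuristic, clamped to the MIN/MAX values above (our own illustration, not the actual controller code):

```python
import math

# Sketch of the horizontal pod autoscaler's core rule:
# desired = ceil(current_replicas * observed / target), clamped to [MIN, MAX].
def desired_replicas(current_replicas, observed_cpu, target_cpu,
                     min_pods=1, max_pods=10):
    desired = math.ceil(current_replicas * observed_cpu / target_cpu)
    return max(min_pods, min(max_pods, desired))

print(desired_replicas(1, 250, 50))   # heavy load pushes one pod towards five
print(desired_replicas(7, 0, 50))     # no load falls back to the minimum
```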
The operator can either reduce the number of unused nodes to save energy or scale up to the limit to provide enough computing power. Depending on which case occurs, a dynamically scaling cluster ensures that at high or low utilization, all tasks are solved in the most efficient way. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Fail-safe As with each element of our cluster, we ensure that more than one instance of each component is running simultaneously. Setting the number to five physical nodes per enclosure is one reason for creating the possibility of optimal scaling and appropriate resilience. The greater the number of nodes, the less likely a total failure or a bottleneck in scaling capabilities becomes. Load balancing and availability are closely related to cluster resilience. One way to ensure that a master node is highly available is to allow a worker node to step in as master. Kubernetes inherently provides this function: as soon as a master node fails, a worker takes its place. The active/passive design is already in use, and this failover implementation is active for all worker nodes. Figure 28 shows how load balancing and failover work together to keep this cluster alive in the event of a master as well as a worker node failure. The actors here are the cluster components etcd, kube-apiserver, kube-controller-manager and kube-scheduler (see Google Kubernetes). The Kubernetes API server, controller manager, and scheduler all run inside Kubernetes as pods. This means that in the event of a failure, each of these pods will be moved to a different node, thus preserving the core services of the cluster. 
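Why an odd node count helps can be illustrated with majority-quorum arithmetic, as used by consensus stores such as etcd, Kubernetes' datastore (our own illustration of the general rule, not a measurement of the PiCube setup): a cluster of n voting members stays available as long as floor(n/2)+1 of them survive.

```python
# Majority-quorum arithmetic as used by consensus stores such as etcd:
# a cluster of n voting members needs a majority to keep operating.
def quorum(n):
    return n // 2 + 1

def tolerable_failures(n):
    return n - quorum(n)

# Five nodes tolerate two failures; an even fourth node adds no extra
# failure tolerance over three, which is why odd sizes are preferred.
for n in (3, 4, 5):
    print(n, quorum(n), tolerable_failures(n))
```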
Here, one considers potential failure scenarios:
- Loss of the master node: If not configured for HA, loss of the master node or its services will have a severe impact on the application. The cluster will not be able to respond to commands or deploy pods. Each service in the master node is critical and is configured appropriately for HA, so a worker node automatically steps in as the master.
- Loss of worker nodes: Kubernetes is able to automatically detect and repair pods. Depending on how the services are balanced, there may be an impact on the end users of the application. If any pods on a node are not responding, kubelet detects this and informs the master to use a different pod.
- Network failure: The master and worker nodes in a Kubernetes cluster can become unreachable due to network failures. In some cases, they are treated as node failures. In this case, other nodes are used accordingly to replace the respective node that is unreachable.
Source: Own representation. Figure 30: Failure and takeover scenario of the master node rpi1 and the worker node rpi3. Kubernetes is configured to be highly available in order to tolerate the failure of one or more master nodes and up to four worker nodes. This is critical for running development or production environments in Kubernetes. An odd number of nodes is chosen so that it is possible to keep the cluster alive even with only one node. In the last instance, a single remaining node can continue to operate by acting as both master and worker. In this state, the cluster is neither highly available, fail-safe, nor completely resilient, but it survives. 
","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Cost efficiency and performance Looking at the costs, there are two sides to the story. With the total acquisition costs, minus the contributed effort to assemble, install and configure the cluster, the value of the investment is exactly 358.79 EUR. For comparison, this construct is contrasted with a simple Linux cluster. This Linux cluster consists of commercially available PCs with core data comparable to that of the Raspberry Pi nodes.

| | Raspberry Pi 3 Model B | Linux PC |
|---|---|---|
| CPU | Cortex-A53 1.2 GHz Quadcore | Intel Celeron J1900 2 GHz Quadcore |
| RAM | 1024 MB | 4 GB |
| Network | 100 Mbps | 1000 Mbps |
| Power consumption | max. 4 W | max. 10 W |
| Price per computer | approx. 35 € | approx. 95 € |
| Total price | approx. 175 € | approx. 475 € |

Table 2: Cost comparison of the core components of Raspberry Pi and Linux PC. If we now compare the core data, we can see that the comparison system is definitely associated with higher acquisition costs. It is important to mention that this comparison primarily focuses on the costs and not the performance of the individual systems. It is obvious that a Linux PC, based on a CISC processor architecture, achieves higher FLOPS than an ordinary ARM processor. Nevertheless, it becomes clear that, despite the lower performance, there is a significant difference in terms of cost, especially when performance per watt is calculated. Due to the compact design of the Raspberry Pi and the accommodation of all components, such as the integrated power supply and GPU, it is a competitive partner for the Linux PC. 
In relation to the costs, the question arises how the comparison systems perform in terms of computing power. For this, a performance test is carried out with the help of sysbench. Sysbench is a benchmark application that quickly gives an impression of the system performance. For this purpose, a CPU benchmark is run on both systems, which calculates all prime numbers up to 20000; the results are shown in Table 3 and Figure 31.

| | Raspberry Pi 3 Model B | Linux PC |
|---|---|---|
| CPU benchmark | Prime number calculation | Prime number calculation |
| Threads (process parts) | 4 | 4 |
| Limit | 20000 | 20000 |
| Calculation time | 115.1536 seconds | 11.2800 seconds |

Table 3: Comparison times of the prime number calculation up to 20000. Source: Own representation. Figure 31: Results of the sysbench benchmark run on node rpi2. The difference in the calculation time is clearly visible: roughly 104 seconds. According to the visible comparisons and as already mentioned in the cost comparison, there is no question that a Linux PC based on the CISC architecture has a higher CPU performance than the Raspberry Pi with an ARM architecture. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Energy efficiency and cooling ARM processors, such as those installed on the Raspberry Pi, have a high energy efficiency with a clock frequency of 1.2 GHz and a power consumption of max. 4 watts. The average power consumption is about 2 watts in idle mode. The switch consumes 2.4 watts at 0.2 amps of current and 12 volts. The USB charger sits at the end of the power chain, and its total wattage output is made up of all connected components. 
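The per-component draws quoted in this section can be totaled in a quick back-of-the-envelope sketch (values in watts as stated here and in the power table; the USB charger itself is treated as a pass-through):

```python
# Back-of-the-envelope power budget for the PiCube, using the draws
# quoted in this section (watts): five Pis, switch, display, case fan.
components = {
    'Raspberry Pi 3 Model B (x5)': (5 * 2.0, 5 * 4.0),   # (idle, max)
    'Edimax gigabit switch':       (2.4, 2.4),
    '1602 LCD module':             (0.1, 0.1),
    'Antec 92 mm case fan':        (1.25, 1.25),
}

idle = sum(lo for lo, _ in components.values())
peak = sum(hi for _, hi in components.values())
print(f'idle: {idle:.2f} W, peak: {peak:.2f} W')
```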
Summarizing with all installed components, you get a total power consumption of 13.75 watts in idle and 23.75 watts at maximum load (see Table 4). Power consumption Component Idle maximum Raspberry Pi 3 Model B 2 Watt 4 Watt Edimax ES-5800G V2 Gigabit Switch (8-Port) 2.4 watt 2.4 watt LCD display module 1602 HD44780 with TWI controller 0.1 watt 0.1 watt Antec TRICOOL 92mm 4-pin case fan 1.25 watt 1.25 watt Anear 60 Watt USB Charger (6-Port) - - Total power consumption 13.75 Watt 23.75 Watt Table 4: Total power consumption of the PiCube in idle mode and maximum CPU load of 100%. In the following test, CPU load is generated using the Sysbench prime calculation program, and the advantages and disadvantages of active and passive cooling elements are shown. We read system values such as temperature, clock frequency and voltage using the following commands in each case: 61 vcgencmd measure_temp vcgencmd scaling_cur_freq vcgencmd measure_volts core Listing 7: Commands for querying temperature, clock frequency and voltage. In the following, we look at three temperature curves in the case. The CPU clock frequency is 1.2 GHz and the CPU voltage is 1.325 volts over a period of 5 minutes: Temp1: In case, without heatsink on SoC, without active cooling. Temp2: In case, with heatsink on SoC, without active cooling. Temp3: In case, with heatsink on SoC, with active cooling. CPU utilization (%) Temp1 (°C) Temp2 (°C) Temp3 (°C) 0 39 39 44 100 77 77 82 Table 5: Measured values of heat generation in the case. Next, we look at three temperature profiles of a Raspberry Pi processor. The CPU clock frequency is 1.2 GHz and the CPU voltage is 1.325 volts over a period of 10 minutes: Temp1: CPU, without heat sink on SoC, without active cooling. Temp2: CPU, with heat sink on SoC, without active cooling. Temp3: CPU, with heat sink on SoC, with active cooling. 
CPU utilization (%) Temp1 (°C) Temp2 (°C) Temp3 (°C) 0 44 32.2 27.8 100 83.3 83.3 69.8 Table 6: Measured values of the CPU heat development. The heat development of the circuit boards of each individual computer is also taken into account. Although this is low, it increases constantly with the number of nodes installed in the case. The heat development is about 35 degrees Celsius with an average load of a single board. With 5 nodes, this is already around 38 degrees, which corresponds to a factor of around 1.08 per node. If all 5 nodes are overclocked by increasing the processor's clock frequency, this factor increases to 1.1. Temperature differences of 10 degrees in the case and at the processor show that the maximum performance of all hardware nodes cannot be exploited without appropriate cooling. Passive heat sinks and the active cooling already implemented with a case fan help here. There is no question that the optimized airflow inside the case also contributes to the improved cooling performance. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:4","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Use Case SETI@home After evaluating the cluster, we turn to a use case from the scientific domain. A current public resource computing (PRC) project of BOINC is SETI@home, a scientific experiment run by the University of California at Berkeley that uses computers connected to the Internet in the search for extraterrestrial intelligence. One participates by running a free client program on a computer that downloads and analyzes radio telescope data. This project relies on the concept of grid computing. 
Data sets to be processed are divided into smaller data sets and distributed to all participating clients, who compute and communicate the results to the distributor, which reassembles the computations into an overall data set. The current computing power of the entire BOINC grid is about 20.6 PetaFLOPS, distributed over nearly 0.9 million computers. SETI@home has a share of about 19.1% of this. 62 In the following, container virtualization is exploited and pre-built BOINC client Docker images are used. These images are prefabricated containers, which are started as scalable pods on the PiCube and scale automatically in order to utilize the entire computing power of the cluster. To do this, you register with the SETI@home project and create an account. Using this account data, you generate a container application called k8s-boinc-demo and start it on the cluster with a scaling limit of 10 pod instances. In Figure 32, you can see from the Kubernetes dashboard how the pods are distributed evenly or according to workload across worker nodes rpi2 to rpi5 after launch. 63 Source: Own representation. Figure 32: Overview of the utilization, status and distribution of the pods on the nodes rpi2 to rpi5. Within the SETI@home account, we define how the individual clients or pods are utilized. The CPU utilization is left at the default value of 100% and after about 5 minutes you can see how the CPU utilization of all cluster nodes increases to 100% and remains constant at this value (see Figure 33). The cluster now computes data of the SETI@home project. Source: Own representation. Figure 33: The CPU utilization of node rpi3 at 100%. Figure 34 shows the computers currently logged in to the grid with our account information. Each pod is identified here as a single client. If we now assume that the number of PiCube clusters increases to ten, the number of pods would multiply by the same factor. 
With 10 pod instances per cluster, this means 100 active SETI@home clients, which could make their computing power available to the BOINC grid. Source: (SETI@home - Your Computers, 2018). Figure 34: Listing of all logged-in clients or pods in our cluster. As this use case shows, the scientific utility of this cluster is beyond question. Dynamic pod scaling demonstrates that even individually low-performance nodes can achieve high computing power when combined as a swarm. BOINC's computing grid is a typical example of how computers, distributed around the world and connected via the Internet, can solve problems together. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:7:5","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Conclusion and outlook It is possible to construct a system comparable to a supercomputer at low cost. It is obvious that the performance of an ARM processor cannot currently keep up with commercially available CISC processors, as described in Cost efficiency and performance. However, the use case and illustration of dynamic scaling show that once distributed systems are interconnected, they develop very high parallel computing power. The Raspberry Pi cluster provides an interesting approach to perform tests or use cases in research, education, and academia. ARM architectures will continue to evolve rapidly in performance and efficiency over the next few years, with power consumption always in focus. The requirements set at the beginning of this thesis, such as energy efficiency, resilience and modularity, have been met and sensibly implemented in the construct. 
There is no question that this PiCube cluster is not a real competitor to massively parallelized supercomputers, but the decisive approach lies in the benefit of a computing grid. There are several ways to use this system in the future. On the one hand, its simplicity means it can be used for cluster computing to provide a small, full-featured, low-cost, energy-efficient development platform. This includes understanding its limitations in terms of performance, usability and maintainability. On the other hand, it can act as a mobile and self-sufficient cloud-in-a-box system by adding components such as solar cells and mobile connectivity. Either way, it will remain a fascinating project that offers developers many possibilities and encourages out-of-the-box thinking. ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:8:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Bibliography Adams, J. (September 2017). SFT Guide 14/17 - Raspberry Pi Tips, Tricks \u0026 Hacks (No. 1). Third generation: The innovations of the Raspberry Pi 3, p. 159. Baier, J. (2017). Getting Started with Kubernetes - Harness the power of Kubernetes to manage Docker deployments with ease. Birmingham: Packt Publishing. Bauke, H., \u0026 Mertens, S. (2006). Cluster computing - Practical introduction to high performance computing on Linux clusters. Heidelberg: Springer. Bedner, M. (2012). Cloud computing - technology, security and legal design. Kassel: kassel university press. Bengel, G., Baun, C., Kunze, M., \u0026 Stucky, K.-U. (2008). Master course parallel and distributed systems - fundamentals and programming of multicore processors, multiprocessors, clusters and grids. 
Wiesbaden: Vieweg+Teubner. Beowulf Cluster Computing. (January 12, 2018). Retrieved from MichiganTech - Beowulf Cluster Computing: http://www.cs.mtu.edu/beowulf/ Beowulf Scalable Mass Storage (T-Racks). (January 12, 2018). Retrieved from ESS Project: https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW80.html BOINC - Active Projects Statistics. (January 31, 2018). Retrieved from Free-DC - Distributed Computing Stats System: http://stats.free-dc.org/stats.php?page=userbycpid\u0026cpid=cfbdd0ffc5596f8c5fed01bbe619679d Cache. (January 14, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0309291.htm Christl, D., Riedel, M., \u0026 Zelend, M. (2007). Communication systems / computer networks - Research of tools for the control of a massively parallel cluster computer in the computer center of the West Saxon University of Applied Sciences Zwickau. Zwickau: Westsächsische Hochschule Zwickau. CISC and RISC. (January 28, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0412281.htm Containers vs. virtual machines. (December 13, 2017). Retrieved from NetApp Blog: https://blog.netapp.com/blogs/containers-vs-vms/ Coulouris, G., Dollimore, J., Kindberg, T., \u0026 Blair, G. (2012). Distributed systems - concepts and design. Boston: Pearson. Dennis, A. K. (2013). Raspberry Pi super cluster. Birmingham: Packt Publishing. The science behind SETI@home. (January 30, 2018). Retrieved from SETI@home: https://setiathome.berkeley.edu/sah_about.php Docker on the Raspberry Pi with HypriotOS. (January 24, 2018). Retrieved from Raspberry Pi Geek: http://www.raspberry-pi-geek.de/Magazin/2017/12/Docker-auf-dem-Raspberry-Pi-mit-HypriotOS Eder, M. (2016). Hypervisor- vs. container-based virtualization. Munich: Technical University of Munich. Einstein@Home on Android devices. (January 23, 2018). 
Retrieved from GEO600: http://www.geo600.org/1282133/Einstein_Home_on_Android_devices Enable I2C Interface on the Raspberry Pi. (January 28, 2018). Retrieved from Raspberry Pi Spy: https://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/ Encyclopedia - VAX. (January 20, 2018). Retrieved from PCmag: https://www.pcmag.com/encyclopedia/term/53678/vax Failover Cluster. (January 20, 2018). Retrieved from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/ff650328.aspx Fenner, P. (December 10, 2017). So What's a Practical Laser-Cut Clip Size? Retrieved from DefProc Engineering: https://www.deferredprocrastination.co.uk/blog/2013/so-whats-a-practical-laser-cut-clip-size/ Fey, D. (2010). Grid computing - An enabling technology for computational science. Heidelberg: Springer. GitHub - flash. (January 24, 2018). Retrieved from hypriot / flash: https://github.com/hypriot/flash Goasguen, S. (2015). Docker Cookbook - Solutions and Examples for Building Distributed Applications. Sebastopol: O'Reilly. Grabsch, V., \u0026 Radunz, Y. (2008). Seminar presentation - Amdahl's and Gustafson's law. n.p.: Creative Commons. Herminghaus, V., \u0026 Scriba, A. (2006). Veritas Storage Foundation - High End Computing for UNIX Design and Implementation of High Availability Solutions with VxVM and VCS. Heidelberg: Springer. 
Hori","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:9:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Appendix A - Script: Installing Kubernetes ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:10:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Appendix B - Script: LCD display ","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:11:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Projects"],"content":"Appendix C - Script: Docker Autoscaling Dockerfile: FROM php:5-apache ADD index.php /var/www/html/index.php RUN chmod a+rx index.php index.php: \u003c?php $x = 0.0001; for ($i = 0; $i \u003c = 1000000; $i++) { $x += sqrt(\\$x); } echo \"OK!\"; ?\u003e Cf. (Grabsch \u0026 Radunz, 2008, p. 2ff) ; (Kaiser, 2009, p. 16f). ↩︎ Cf. (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 319f) ; (Grabsch \u0026 Radunz, 2008, pp. 7-11) ↩︎ Cf. ibid. ↩︎ Cf. (Kersken, 2015, p. 197). ↩︎ Cf. (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 2) ; (Kersken, 2015, p. 198ff). ↩︎ Cf. (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 2f) ; (Schill \u0026 Springer, 2007, p. 31). 
↩︎ Cf. ibid. ↩︎ Cf. (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 435) ; (Fey, 2010, p. ; (Tanenbaum, 2007, p. 18f). ↩︎ Cf. (Liebel, 2011, p. 171) ; (Bengel, Baun, Kunze, \u0026 Stucky, 2008, pp. 3, 435) ; (Fey, 2010, p. 5). ↩︎ Cf. (Schill \u0026 Springer, 2007, p. 26f). ↩︎ Cf. (Ries, 2012, p. 17). ↩︎ Cf. ibid; (Kroeker, 2011, p. 15). ↩︎ Cf. (Project list, 2018) ; (Bauke \u0026 Mertens, 2006, pp. 32-34) ; (Our home galaxy - the Milky Way, 2018) ; (If we want to Find Aliens, We Need to Save the Arecibo Telescope, 2018) ; (Einstein@Home on Android Devices, 2018). ↩︎ Cf. (New DIY supercomputer saves £1,000s, 2018). ↩︎ Cf. (Tanenbaum, 2007, p. 2) ; (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 26). ↩︎ Cf. (Coulouris, Dollimore, Kindberg, \u0026 Blair, 2012, p. 2) ; (Tanenbaum, 2007, p. 9) ; (Coulouris, Dollimore, Kindberg, \u0026 Blair, 2012, pp. 2, 598f, 603f) ; (Tanenbaum, 2007, pp. 232, 240). ↩︎ Cf. (Coulouris, Dollimore, Kindberg, \u0026 Blair, 2012, pp. 125, 128, 464). ↩︎ Cf. (Bedner, 2012, p. 6f, 22f). ↩︎ Cf. (Coulouris, Dollimore, Kindberg, \u0026 Blair, 2012, p. 13f). ↩︎ Cf. (Matros, 2012, p. 139) ; (Lee, 2014, p. 8ff) ; (Lobel \u0026 Boyd, 2014, p. 3f). ↩︎ Cf. (Neuenschwander, 2014, p. 4) ; (Bedner, 2012, p. 32ff) ; (PaaS or IaaS, 2018). ↩︎ Cf. (Bauke \u0026 Mertens, 2006, p. 51). ↩︎ (Pfister, 1997, p. 98). ↩︎ Cf. (Liebel, 2011, p. 169ff) ; (Schill \u0026 Springer, 2007, p. 24ff) ; (Bengel, Baun, Kunze, \u0026 Stucky, 2008, p. 415). ↩︎ Cf. (Top500 List, 2018) ; (Liebel, 2011, p. 169ff) ; (Schill \u0026 Springer, 2007, p. 24ff). ↩︎ Cf. (Ulmann, 2014, p. 51f) ; (Bauke \u0026 Mertens, 2006, p. 21). ↩︎ Cf. (Cache, 2018) ; (Bauke \u0026 Mertens, 2006, p. 21f) ; (Christl, Riedel, \u0026 Zelend, 2007, p. 5). ↩︎ Cf. (Bauke \u0026 Mertens, 2006, p. 22ff) ; (Christl, Riedel, \u0026 Zelend, 2007, p. 5). ↩︎ Cf. (Ries, 2012, p. 11) ; (Bengel, Baun, Kunze, \u0026 Stucky, 2008, pp. 190f, 205) ; (Dennis, 2013, p. 41f) ; (Bauke \u0026 Mertens, 2006, pp. 143ff, 45). 
↩︎ Cf. (Liebel, 2011, p. 170f) ; (Failover Cluster, 2018). ↩︎ Cf. (Failover Cluster, 2018) ; (Liebel, 2011, p. 185f). ↩︎ Cf. (Christl, Riedel, \u0026 Zelend, 2007) ; (Bauke \u0026 Mertens, 2006, p. 52f). ↩︎ Cf. (Bauke \u0026 Mertens, 2006, p. 51f) ; (Christl, Riedel, \u0026 Zelend, 2007, p. 4) ; (Overview of Microsoft HPC Pack and SOA in Failover Cluster, 2018). ↩︎ Cf. (VMS Software, Inc. Named Exclusive Developer of Future Versions of OpenVMS Operating System., 2018) ; (Encyclopedia - VAX, 2018). ↩︎ Cf. (Load-Balanced Cluster, 2018). ↩︎ Cf. (Beowulf Cluster Computing, 2018) ; (Parallel Linux Operating System - Beowulf Gigaflop/s Workstation Project, 2018) ; (Beowulf Scalable Mass Storage (T-Racks), 2018). ↩︎ Cf. (Christl, Riedel, \u0026 Zelend, 2007, p. 9f). ↩︎ Cf. (Bauke \u0026 Mertens, 2006, p. 9, 27f) ; (Kersken, 2015, p. 126). ↩︎ Cf. (Coulouris, Dollimore, Kindberg, \u0026 Blair, 2012, p. 318f). ↩︎ Cf. (Kaiser, 2009, p. 41, 43) ; (Mandl, 2010, p. 28f) ; (Eder, 2016, p. 1f). ↩︎ Cf. (Eder, 2016, p. 1ff). ↩︎ Cf. (rkt - A security-minded, standards-based container engine, 2018). ↩︎ Cf. (Nickoloff, 2016, p. 4f) ; (Goasguen, 2015, p. 1) ; (Miell \u0026 Sayers, 2016, p. 5f) ; (How nodes work, 2018). ↩︎ Cf. (Nickoloff, 2016, p. 255) ; (Goasguen, 2015, p. 199) ; (Swarm mode key concepts, 2018). ↩︎ Cf. (Nickoloff, 2016, p. 255) ; (Goasguen, 2015, p. 199) ; (Swarm mode key concepts, 2018). ↩︎ Cf. (What is Kubernetes?, 2018). ↩︎ Cf. (Nodes, 2018) ; (Pods, 2018). ↩︎ Cf. (Kubernetes Components, 2018). ↩︎ Cf. ibid. ↩︎ Cf. 
(CISC a","date":"Monday, Aug 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/:12:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster/"},{"categories":["Other"],"content":"Let’s create your own Azure Pipelines Agent Images or GitHub Actions Runner Images based on the official source code used for GitHub-hosted runners used for Actions, as well as for Microsoft-hosted agents used for Azure Pipelines. I prepared both Azure Pipelines and GitHub Workflow examples for you to choose from. It also can create your Virtual Machine Scale Set (VMSS) with the latest VM image to easily use for self-hosted scale-set agents in Azure DevOps. ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:0:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"The Problem Usually when I start working with teams in new cloud environments for a longer period of time, I find the same pattern: infrastructure and application code is first tested on cloud-hosted pipeline/workflow agents/runners (because they are quickly available, low cost and reliable) and then (much too late) switched to self-hosted agents/runners (because they are more secure and isolated). With a security-first mindset, however, development and testing must start directly on self-hosted agents/runners. 
Not only because of security but also because of isolated network connectivity such as private endpoints. Another problem is that teams often create their own VM images which are not identical to the official images used for the cloud-hosted agents/runners. This can lead to problems when the application code is tested on the cloud-hosted agents/runners and then deployed to the self-hosted agents/runners. This is because the application code may not work on the self-hosted agents/runners due to differences in the VM images. This often results in downtime and/or rework of pipeline code. ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:1:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"The Solution The solution is to create your own Azure Pipelines Agent Image or GitHub Actions Runner Image based on the exact same image used for the official GitHub-hosted runners and Microsoft-hosted agents. This way you can test your infrastructure and application code directly on self-hosted agents/runners from the beginning in a secure and isolated environment. And you can be sure that the application code will work on the cloud-hosted agents/runners as well as on the self-hosted agents/runners. If golden images are used, they can also be built using the predefined packerfiles, or their tooling can be integrated into the golden images. Another good thing is that you stay up to date with the latest official tools and versions used in the official images. 
","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:2:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Create and/or update the VMSS The solution not only creates the VM images but also can create or update the Virtual Machine Scale Set (VMSS) with the latest VM image to use for auto-scaling and self-hosted scale-set agents. This way you can use the VMSS to create your own self-hosted agents/runners. Note: By the way, the Functions.ps1 used by the pipeline and workflow can also be executed on your own local machine. Just dot-source the file and call the function Add-Image and then Add-VMSS with the required parameters. ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:3:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Create VM Image (Azure Pipeline) If you prefer to use Azure Pipelines, you can follow the following steps to create your image. For authentication to Azure, a service connection is ued. 
","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:4:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Pre-requisites Azure Account Azure DevOps Account Service Connection (Contributor) Optional: GitHub Account Azure Service Principal (Contributor) ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:4:1","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Prepare the Azure Pipeline to create your image Clone/Fork the repository segraef/apai. Create a new Azure Pipeline using the /.pipelines/pipeline.yml file. Create a service connection to your Azure Subscription. Run the pipeline and choose between the following image types: UbuntuMinimal (default) Ubuntu2204 Ubuntu2004 Windows2019 Windows2022 Run the pipeline Azure Pipelines Agent Image. Generate image and create the VMSS. Generated VM image, ready to use. 
","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:4:2","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Create VM Image (GitHub Workflow) If you prefer to use GitHub Workflows, you can follow the following steps to create your image. For authentication to Azure, a service principal is used and stored as a GitHub Actions repository secret. ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:5:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Pre-requisites Azure Account GitHub Account Azure Service Principal (Contributor) ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:5:1","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Prepare the 
GitHub Workflow to create your image Clone/Fork the repository segraef/apai. Create a new GitHub Workflow using the /.github/workflows/workflow.yml file. Create a service principal assigned the Contributor role on your Azure Subscription. Create the GitHub Actions repository secret AZURE_CREDENTIALS with the output data from the previously created service principal, which looks like this: { \"clientId\": \"\u003cclientId\u003e\", \"clientSecret\": \"\u003cclientSecret\u003e\", \"subscriptionId\": \"\u003csubscriptionId\u003e\", \"tenantId\": \"\u003ctenantId\u003e\", \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\", \"resourceManagerEndpointUrl\": \"https://management.azure.com/\", \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\", \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\", \"galleryEndpointUrl\": \"https://gallery.azure.com/\", \"managementEndpointUrl\": \"https://management.core.windows.net/\" } Run the GitHub Runner Image workflow and choose between the following image types: UbuntuMinimal (default) Ubuntu2204 Ubuntu2004 Windows2019 Windows2022 Run the pipeline GitHub Runner Image. Generate the image. Generated VM image, ready to use. 
","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:5:2","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Reference Workflow You can have a look at this reference workflow: ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:5:3","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["Other"],"content":"Create self-hosted scale-set agents If you let your VMSS created by the pipeline/workflow, you can use it to create your own self-hosted scale-set agents in Azure DevOps. 
Azure Virtual Machine Agent Scale Set References Self-hosted scale-set agents segraef/apai Create a service principal GitHub-hosted runners Microsoft-hosted agents ","date":"Thursday, Sep 28, 2023","objectID":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/:6:0","tags":["github","pipeline","workflow","azure","devops","agent","runner","image","vm","vmss","scale-set","self-hosted","golden-image","packer","packerfile","packerfiles","packer-build"],"title":"Create your own official Azure Pipelines Agent Images and GitHub Actions Runner Images","uri":"/create-your-own-official-azure-pipeline-agents-images-and-github-actions-runner-images/"},{"categories":["AI"],"content":"Let’s build your own AI voice assistant which is better than Amazon’s Alexa using Azure Cognitive Services (STT and TTS) and OpenAI. All you need is a bit of basic Python understanding, an Azure account, a microphone and a speaker. This will take about 15 minutes to set up and you’ll be done. For those who don’t have the patience and want to get it done immediately: A working version, written in under 80 lines of code, can be found here. By combining the speech recognition and synthesis capabilities of Azure with the power of OpenAI’s GPT model, we can create an intelligent and conversational voice assistant. Let me provide you with step-by-step instructions and code examples to help you get started. 
","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:0:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Prerequisites Before getting started make sure you have the following prerequisites installed on your system: Create a Speech resource in Azure An Azure OpenAI Service resource with a model deployed. For more information about model deployment, see the resource deployment guide. Python (at least 3.7.1) Azure Cognitive Services Speech SDK Python libraries: openai, os, requests, json Currently, access to Azure OpenAI is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. # Azure Cognitive Services Speech SDK pip install azure-cognitiveservices-speech # OpenAI Python Library pip install openai ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:1:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Workflow Let me give you the bigger picture upfront for a better understanding what we’re going to do: The Speech-to-Text (STT) SpeechRecognizer component from Cognitive Services recognizes your speech and language and converts it into text. 
The OpenAI component sits in between and acts as the AI voice assistant: it takes the input from the SpeechRecognizer and generates an intelligent response using a GPT model. The response will then be synthesized into speech by the Text-To-Speech (TTS) SpeechSynthesizer. ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:2:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Setting up Azure Cognitive Services First, obtain your Azure Cognitive Services subscription key and region. Then, using the Azure Cognitive Services Speech SDK, you can configure the SpeechConfig object with your subscription key and region. For example: # Save as environment variable export cognitive_services_speech_key=\u003cYOUR_API_KEY\u003e import os import azure.cognitiveservices.speech as speechsdk speech_key = os.environ.get('cognitive_services_speech_key') service_region = \"australiaeast\" speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region) speech_config.speech_synthesis_voice_name = \"en-US-AshleyNeural\" This configuration allows you to access Azure’s speech recognition and synthesis capabilities. All voices can be found here. 
","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:3:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Integrating Azure OpenAI In addition to Azure Cognitive Services, we will integrate OpenAI’s GPT model to generate intelligent responses. First, ensure you have an OpenAI API key. Azure OpenAI Keys. Then, using the OpenAI Python library, you can configure the API with your key. For example: # Save as environment variable export openai_api_key=\u003cYOUR_API_KEY\u003e export openai_api_base=https://\u003cYOUR_OPENAI_SERVICE\u003e.openai.azure.com import os import openai openai.api_type = \"azure\" openai.api_version = \"2023-05-15\" openai.api_key = os.environ.get('openai_api_key') openai.api_base = os.environ.get('openai_api_base') With this integration, we can leverage the power of GPT to generate contextually relevant responses based on user input. ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:4:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Recognizing and Generating Speech The core functionality of our voice assistant involves recognizing user speech and generating appropriate responses. Azure Cognitive Services provides the SpeechRecognizer class for speech recognition. 
Here’s an example of recognizing speech from an audio input stream: # Process the recognized text speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config) result = speech_recognizer.recognize_once() if result.reason == speechsdk.ResultReason.RecognizedSpeech: recognized_text = result.text Once the user’s speech is recognized, we can use OpenAI’s GPT model to generate a response. Here’s an example of generating a response using OpenAI: # Process the generated response response = openai.Completion.create( engine=\"davinci\", prompt=recognized_text, max_tokens=100 ) generated_response = response.choices[0].text.strip() ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:5:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Synthesizing Speech Output To provide a natural voice output, Azure Cognitive Services offers the SpeechSynthesizer class for speech synthesis. You can synthesize the generated response into speech using the SpeechSynthesizer’s speak_text_async method. 
Here’s an example: # Play or save the synthesized speech speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config) result = speech_synthesizer.speak_text_async(generated_response).get() if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted: ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:6:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Example and Code Here is a working version written in Python. ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:7:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Next steps We covered the process of building a GPT-powered AI voice assistant using Azure Cognitive Services and OpenAI. The next step is to port this onto a Raspberry Pi which we equip with a speaker and microphone. More details in the next post. References Create a Speech resource OpenAI resource deployment guide. 
OpenAI Access Request github.com/PAi ","date":"Sunday, Jun 11, 2023","objectID":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/:8:0","tags":["gpt","ai","azure","cognitive","openai","nlp","synthesis","language","processing","mycroft","picroft"],"title":"Build your own GPT-Powered AI Voice Assistant with Azure OpenAI and Cognitive Services","uri":"/building-your-own-gpt-powered-ai-voice-assistant-with-azure-cognitive-services-and-openai/"},{"categories":["AI"],"content":"Unlock your artistic potential and create jaw-dropping AI art with these exclusive cheat codes for Midjourney. These expert hacks will take your art to new heights, no coding skills required! ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:0:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Introduction Artificial intelligence (AI) is revolutionizing many industries, including the world of art. I have seen firsthand the incredible potential of artificial intelligence to create stunning and original pieces of art. From computer-generated paintings to AI-generated music, the possibilities are endless. That’s why I want to motivate you to explore the world of AI art and create something truly jaw-dropping. In this blog post, I’ll provide you with sweet and exclusive prompts to get started with creating jaw-dropping AI art. Let’s dive in and see how you can use your skills and knowledge to create something truly amazing! 
","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:1:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Prerequisites You need Discord, should already have an account with www.midjourney.com and be familiar with how to use /imagine prompts. ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:2:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Basic Prompt Commands Let’s just get straight to the point, we all know the basic commands: /imagine creates an image based on the prompt text you provided. It produces a grid of 4 images, taking around 50 seconds with default settings. /settings gives you an interactive window to select options. /help displays universally helpful information and tips about the Midjourney bot. /info shows information about your profile, plan, usage, and currently running jobs. /subscribe creates a unique link to the subscription page of your current Discord account, without needing to sign in on the website. But more important are the /imagine parameters: --aspect or --ar changes the aspect ratio of a generation (default: --ar 1:1) --no Negative prompting, --no mushrooms would try to remove mushrooms from the image. 
The --stylize or --s parameter influences how strongly Midjourney’s default aesthetic style is applied to Jobs. --quality \u003c.25, .5, 1, or 2\u003e, or --q \u003c.25, .5, 1, or 2\u003e defines how much rendering quality time you want to spend (default: 1) --seed \u003cinteger between 0–4294967295\u003e When using the MidJourney bot to create AI-generated art, the process begins by generating a field of visual noise, similar to television static, using a seed number. This seed number is randomly generated for each image, but can also be specified using the --seed or --sameseed parameter. By using the same seed number and prompt for multiple images, you can achieve similar ending images with subtle variations, allowing you to experiment and fine-tune your artistic vision. For more parameters, see the Parameters List. ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:3:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Multi-Prompts and Prompt Weighting To employ multi-prompts in Midjourney, insert a double colon (::) in between your prompts without any gaps. Multi-prompts are typically employed to modify the behavior of compound words in the prompts. cup cake cup:: cake Cup cake is considered a single word. Cup and cake are considered separate words. The double colon :: is used to divide a prompt into parts. You can put a number after it to show how important that part is. Adding a number to a word is called prompt weighting or text weighting. jelly:: fish jelly::2 fish Jelly and fish are considered separate words. Jelly is twice as important as fish. 
For more details, see Multi Prompts ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:3:1","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Prefer Suffix Since you know me, and everything I do is about automating everything, you can and should use a suffix: the /prefer command saves you having to type the options every time you use the /imagine command. I’m usually working with the suffix /prefer suffix --ar 2:3 --q 2, which always sets my aspect ratio to portrait (--ar 2:3) and uses the best quality (--q 2). So the next time I send a prompt like kintsugi glacial ice::2 oil painting, palette knife texture, thick paint, wide brushstrokes, vibrant vivid colors::2 by Iris Scott, Leonid Afremov and Vincent Van Gogh::1 it adds the suffix --ar 2:3 --q 2. 
","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:3:2","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Aspect Ratios ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"1:1 (Square, default aspect ratio) Prompt: An atomic color explosion --ar 1:1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:1","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"2:3 (Portrait) Prompt: Professional black and red powder explosion, white background, 8k --ar 2:3 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:2","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to 
Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"3:2 (Landscape) Prompt: watercolor rainbow --ar 3:2 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:3","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"16:9 (Wallpaper) Prompt: white porsche, rainbow neon --ar 16:9 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:4","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"1:2 Prompt: color chaos --ar 1:2 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:5","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"2:1 Prompt: Beautiful, weird and abstract painting of a dark and colourful landscape by Thomas Kinkade, Greg Rutkowski, Dan Mumford, Dan Witz, Daarken and James Gurney 
--ar 2:1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:4:6","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Cheat Codes Okay, enough basics, let’s get to the gist and give you some good examples! Note: In some prompts I used multi-prompts as well as weighting to achieve better results. ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:0","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Featured Image Prompt: Close up of a person's eye, shutterstock, beautiful art, uhd, 4k, high contrast colours, beautiful image ever created, focus on iris, artistic illustration ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:1","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"“as this or that” Let’s say you want to have your character/style look like something you have in mind (Disney, Pixar, Superman, …), just 
use “... as Superman” or “... as Disney”, or even both. Prompt: Superman as a Disney character ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:2","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Glitchart Prompt: infinity space:: glitchart::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:3","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Nebula vs. 
Psychedelic Prompt: fluorescent mushroom:: nebula::1 Prompt: fluorescent mushroom:: psychedelic::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:4","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Double Exposure Prompt: beautiful woman, vibrant colours and firework, double exposure ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:5","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Generate new images based on your own images /imagine https://link-to-your-image.jpg \u003cyour prompt\u003e Base Image Generated Image Credits: Suren Manvelyan Prompt: http://graef.io/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney/20230306165037.png:: Celestial mist superimposed over human eye::5 close up view of eye only ::5 Note Check out the next part From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3) References Midjourney Parameters List Multi Prompts ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/:5:6","tags":["midjourney","AI","generative art","chatgpt","openai","createive AI","prompt engineering","dalle","AI art","art","hack","prompt 
generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-1/"},{"categories":["AI"],"content":"Welcome to the second part of guided prompting for Midjourney to unlock your artistic potential and create jaw-dropping AI art with exclusive cheat codes. These expert hacks will take your art to new heights, no coding skills required! ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:0:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Prerequisites Note Make sure to check out the first part From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (1/3) ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:1:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Cheat Codes Here are more good example prompts! Note: In some prompts I used multi-prompts as well as weighting to achieve better results. 
","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"32 Bit Isometric Prompt: new york city, 32 bit isometric ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:1","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Old Photograph Prompt: New York City:: old photograph::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:2","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Neon Prompt: New York City:: neon::1 ","date":"Friday, Feb 3, 
2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:3","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Oil Painting Prompt: New York City:: oil painting::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:4","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Cyberpunk Prompt: New York City:: cyberpunk::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:5","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Retrowave Prompt: Portrait of Albert Einstein:: retrowave::1 ","date":"Friday, Feb 3, 
2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:6","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Retro Prompt: Portrait of Albert Einstein:: retro::1 Prompt: Retro neon city background, Neon style 80s ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:7","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Blueprint Drawing Prompt: Porsche:: blueprint drawing::1 Prompt: Bugatti Veyron:: blueprint drawing::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:8","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Pixelart Prompt: Elvis Presley:: pixelart::1 Prompt: Elvis 
Presley:: pixelart::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:9","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Synthwave Prompt: Ominous Synthwave Backdrop --ar 3:2 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:10","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Retro-Futurism Prompt: a car driving down the road with palm trees in the background, cyberpunk art, inspired by Mike Winkelmann, lamborghini countach, colorful retrofutur, 80s photo, retro-futuristic, retro-futurism, nice sunset, lamborghini, synthwave ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:11","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney 
(2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"LSD Prompt: psychedelic trip:: LSD::1 ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:12","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Ukiyo-e Art Prompt: mount fuji, ukiyo-e art ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:13","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Duotone Prompt: duotone portrait of stephen hawking, pinkg and green ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:14","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney 
(2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Diagramatic Drwaing Prompt: Gameboy, diagramatic drawing ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:15","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Phantasmal Iridescent Prompt: Phantasmal Iridescent ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:16","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Luminescent Prompt: luminescent black water ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:17","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney 
(2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Fluorescence Prompt: fluorescent black water Prompt: fluorescent black water ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:18","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Carnival Glass Prompt: carnival glass ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:19","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Abstract Prompt: kintsugi glacial ice::2 oil painting, palette knife texture, thick paint, wide brushstrokes, vibrant vivid colors::2 by Iris Scott, Leonid Afremov and Vincent Van Gogh::1 Prompt: colorful fluid, a microscopic photo, by George Aleef, pexels, abstract art, oil slick nebula, technicolour 1, beautiful aesthetic, stunning screensaver, beautiful color art, acrylic pouring ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:20","tags":["midjourney","AI","generative art","neural network art","digital 
art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Artists Note: Aspect ratio was set to 3:2 (Landscape). Prompt: City Skyline by Doug Chiang Prompt: Sky City by Robert McCall Prompt: Illustration by Ian McQue Prompt: Landscape by Leonid Afremov Prompt: Futuristic city by Martin Deschambault Prompt: City Skyline by Kilian Eng Prompt: City by David A. Hardy Prompt: Streets of los angeles by Jeremy Mann ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:2:21","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Keyword Cheat Sheet These keywords are good to control and finetune Keyword Description Fine ultra-detailed realistic may have some graininess and roughness, but it enhances the level of detail. Ultra photorealistic is similar to fine ultra-detailed realistic in terms of its level of realism. Hasselblad H6D produces a sharper focus on the subject, but it deepens shadows. High definition lighting tends to be brighter and more colorful, with increased saturation. 8k technology often produces even more saturated and computer-generated colors than high definition, with extreme lighting effects. 
Cinematic lighting features more dramatic shadows and slightly thicker objects, giving a poster-like appearance. Color grading involves extreme variations in hue and vibrant but not overly saturated colors. Depth of field creates a sharp focus on the subject while blurring the foreground and background. Film lighting features limited light sources, common backlighting, and deep shadows cast by light sources. Rim lighting produces slightly stronger lighting effects than film lighting, but with very similar results. Volumetric lighting Softbox lighting Long exposure Fairy Lights Intricate details designs tend to feature non-realistic crafts and pattern elements. Realism tends to focus on artistic realism, with more uniform backgrounds and subject matter that looks like a painting. Photography typically includes a small area of objects surrounding the subject, with little else in the background. Rendered for IMAX involves more complex subjects with highly directional lighting and subdued saturation. Tilt-shift creates a similar effect to depth of field, but from a high angle or from above. Motion-blur adds speed lines to the image, creating the impression of motion or wind. 35mm film produces vibrant colors but muted saturation, with additional foreground and/or background elements for added detail. Soft focus creates a slightly blurry focus with a thinner subject and potentially grainy details, with colors tending towards pastels. Harsh lighting creates extreme contrast with deep shadows. Minimalist line art features a simplified pen-on-paper type line sketch of the subject with few or no additional elements. Hasselblad full frame produces a similar effect to 35mm film, with a stronger emphasis on contrast. 
Sony Alpha α7, ISO1900, Leica M to specify any lens type or camera type Unreal to specify an unreal engine feel glass vases on top of a table, microscopic photo, inspired by Bruce Munro, intricate, ultra-detailed, 8k, LSD, symmetric, refracted sparkles, ray tracing ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:3:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"ChatGPT + Midjourney and Prompt Generators Just let ChatGPT do the job and generate Midjourney prompts for you: Define your formula in ChatGPT. Midjourney Prompt Formulas generated by ChatGPT. Here are the formulas: Formula 1: image to prompt:: 5 descriptive keywords:: camera type:: camera lens type:: time of day:: style of photograph:: type of film Formula 2: image to prompt:: 5 descriptive keywords:: type of photography:: lens:: distance:: subject:: direction:: type of film Another really helpful tool is the MidJourney Prompt Helper, which gives you amazingly good style suggestions. 
","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:4:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["AI"],"content":"Image2Prompt Want to know what a prompt sits behind a generated image? The CLIP Interrogator 2.1 is your friend! Note Check out the next part From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (3/3) - coming soon References Midjourney Parameters List MidJourney Prompt Helper Multi Prompts Understand Camera Lenses Types of Photography ","date":"Friday, Feb 3, 2023","objectID":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/:5:0","tags":["midjourney","AI","generative art","neural network art","digital art","chatgpt","openai","createive AI","AI Art Tutorial","prompt engineering","dalle","cheat","AI art","art","hack","AI hacks","AI Art Tools","prompt generator","photography"],"title":"From Code to Canvas: A Guide for Prompting Stunning AI Art with MidJourney (2/3)","uri":"/from-code-to-canvas-a-guide-for-prompting-stunning-ai-art-with-midjourney-2/"},{"categories":["Other"],"content":"Perform REST API requests to the HIBP API to verify if your email or password have been involved in a data breach. 
","date":"Monday, Jan 23, 2023","objectID":"/how-to-rest-have-i-been-pwned-hibp-api/:0:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"How to REST the Have I Been Pwned (HIBP) API","uri":"/how-to-rest-have-i-been-pwned-hibp-api/"},{"categories":["Other"],"content":"Prerequisites REST Client (VSCode) or Postman HIBP API Key I prefer integrated extensions like REST Client within my developer workspace to not have to switch between applications - Yeah I’m lazy, so what. A working version for REST calls for email, passwords and breaches can be found at segraef/Scripts/REST or my tiny HIBP web app I wrote using Python and Flask but let me give you the single snippet here to give you the idea: @account = [email protected] @key = '' # Yes you have to put your API key here @api = https://haveibeenpwned.com/api/v3 # Get all breaches for an account GET {{api}}/breachedaccount/{{account}} # API Key hibp-api-key: {{key}} ","date":"Monday, Jan 23, 2023","objectID":"/how-to-rest-have-i-been-pwned-hibp-api/:1:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"How to REST the Have I Been Pwned (HIBP) API","uri":"/how-to-rest-have-i-been-pwned-hibp-api/"},{"categories":["Other"],"content":"Output The output looks like this: A more detailed and non-truncated response with breach details looks like this: GET {{api}}/breachedaccount/{{account}}?truncateResponse=false References Have I been Pwned? 
Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams How to Create Your Own Have I Been Pwned (HIBP) API Request With Python ","date":"Monday, Jan 23, 2023","objectID":"/how-to-rest-have-i-been-pwned-hibp-api/:2:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"How to REST the Have I Been Pwned (HIBP) API","uri":"/how-to-rest-have-i-been-pwned-hibp-api/"},{"categories":["TIL"],"content":"The idea is to create my own Python script performing REST API requests to the HIBP API to check if email addresses or passwords show up in one of the latest breaches. And yes I was just bored and wanted to learn how to do API calls in Python. This post is about how to implement and use a Python script to check for breaches in your email addresses and passwords using the HIBP API (Have I Been Pwned). The HIBP API is a free service that allows you to check if your personal information has been compromised in a data breach. Before you begin, you will need to have the following: A HIBP API key, which can be obtained from the HIBP website (https://haveibeenpwned.com/API/Key) Python 3 installed on your machine The requests library, which can be installed by running pip install requests in your terminal To implement the script, you will need to: Import the os, json and requests libraries at the top of your script. Get the API key from the environment variable. Set the endpoint and headers for the API request. Send a GET request to the API endpoint, passing the API key in the headers. Check the status code of the response. 
","date":"Monday, Jan 23, 2023","objectID":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/:0:0","tags":["hibp","python","rest","api","til"],"title":"How to Create Your Own Have I Been Pwned (HIBP) API Request With Python","uri":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/"},{"categories":["TIL"],"content":"Check if your email address was found in a breach import os, json, hashlib, requests from dotenv import load_dotenv # Load the environment variables from .env load_dotenv() # Get the API key from the environment variables API_KEY = os.getenv('HIBP_API_KEY') api_url = 'https://haveibeenpwned.com/api/v3' # Get the email address from the user input email = input(\"Enter your email address: \") # Use the API key in the headers headers = {'hibp-api-key': API_KEY} # Send the GET request to the HIBP API response = requests.get(f'{api_url}/breachedaccount/{email}', headers=headers) # Check the status code of the response if response.status_code == 404: print(\"Email not found in data breaches\") elif response.status_code != 200: print(\"Error checking email\") else: # Extract the name of the breaches from the response breaches = [breach['Name'] for breach in response.json()] print(f\"Email found in following breaches: {', '.join(breaches)}.\") ","date":"Monday, Jan 23, 2023","objectID":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/:1:0","tags":["hibp","python","rest","api","til"],"title":"How to Create Your Own Have I Been Pwned (HIBP) API Request With Python","uri":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/"},{"categories":["TIL"],"content":"Check if your password was found in a breach import os, json, hashlib, requests from dotenv import load_dotenv # Load the environment variables from .env load_dotenv() # Get the API key from the environment variables API_KEY = os.getenv('HIBP_API_KEY') pwd_api_url = 'https://api.pwnedpasswords.com/range' # Hash the password before sending it to 
the HIBP API password = input(\"Enter your password: \") hashed_password = hashlib.sha1(password.encode('utf-8')).hexdigest().upper() prefix = hashed_password[:5] suffix = hashed_password[5:] # Send the GET request to the Pwned Passwords range API (no API key needed for this endpoint) response = requests.get(f'{pwd_api_url}/{prefix}') # Check the status code of the response if response.status_code != 200: print(\"Error checking password\") else: # Check if the hashed password suffix exists in the response for line in response.text.splitlines(): line_suffix, count = line.split(':') if line_suffix == suffix: print(f\"Password found {count} times. Please use a different password.\") break else: print(\"Password not found. You can use this password.\") The output of the script looks like this: References Have I been Pwned (HIBP) Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams How to REST the Have I Been Pwned (HIBP) API ","date":"Monday, Jan 23, 2023","objectID":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/:2:0","tags":["hibp","python","rest","api","til"],"title":"How to Create Your Own Have I Been Pwned (HIBP) API Request With Python","uri":"/how-to-create-your-own-have-i-been-pwned-api-request-with-python/"},{"categories":["Other"],"content":"Cybersecurity is the practice of protecting internet-connected systems, including hardware, software, and data, from attack, damage, or unauthorized access. One aspect of cybersecurity is protecting personal information, such as passwords and account data, but also your privacy. In this article, we revisit the basics of cybersecurity, as new attacks and new ways for hackers to obtain personal data emerge daily. 
","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:0:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Password Strength The strength of a password is determined by its complexity and length. A strong password should be at least 12 characters long and include a mix of uppercase and lowercase letters, numbers, and special characters. Passwords that are shorter or consist only of easily guessable information, such as “password” or “1234,” are considered weak and easy to crack. . One tool that can be used to check the strength of a password is Have I Been Pwned (HIBP), a website that allows users to check if their email address or other personal information has been involved in a data breach. Note: If you want to verify if your email or password have been involved in a data breach you can check it using my Python/REST script or my tiny HIBQ web app I wrote using Python and Flask. Have I Been Qwned (HIBQ) web app input fields. (HIBQ) Have I Been Qwned (HIBQ) web app results page. 
(HIBQ) ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:1:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Cracking Time The time it takes to crack a password depends on the complexity of the password and the resources available to the person trying to crack it. A simple, short password can be cracked almost instantly using a brute force attack, which involves guessing every possible combination of characters. A longer and more complex password, on the other hand, may take years to crack even with the use of specialized software and powerful computer processors. ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:2:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"2020 vs. 2022 Password tables comparing MD5 hashes from 2020 and 2022 cracked by an RTX 2080 GPU. 
(Hive Systems) ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:2:1","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Passphrase One way to make a password stronger is to use a passphrase, which is a sequence of words or other text that is easy to remember but difficult for others to guess. Passphrases are often longer than traditional passwords, which makes them more resistant to cracking attempts. ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:3:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Brute Force Attacks A brute force attack is a method used by hackers to gain access to a password or personal information by guessing every possible combination of characters. These attacks can be automated and use specialized software to quickly guess thousands or even millions of combinations in a short amount of time. While a simple and short password can be cracked almost instantly using a brute force attack, longer and more complex passwords may still take a significant amount of time to crack, even with the use of powerful computer processors. 
To defend against brute force attacks, it is important to use strong and complex passwords, and to use two-factor authentication. Additionally, it is important to be aware of phishing scams and to be skeptical of unsolicited emails or messages. DES Cracker circuit board by the Electronic Frontier Foundation (EFF) fitted with 64 Deep Crack chips using both sides. A US$250,000 DES-cracking machine containing over 1,800 custom chips that could brute-force a DES key in a matter of days. (EFF) ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:4:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Two-Factor (2FA) and Multi-Factor Authentication (MFA) Another way to protect personal information is to use two-factor authentication (2FA), which requires the user to provide two forms of identification before gaining access to an account or system. This can include a password and a fingerprint, or a password and a code sent to the user’s phone. 2FA adds an extra layer of security, as even if a hacker is able to guess or steal a password, they will not be able to access the account without the second form of identification. Multi-Factor Authentication (MFA) is an extension of 2FA, which involves providing multiple forms of identification before gaining access to an account or system. This can include a combination of something the user knows (such as a password), something the user has (such as a token or smartphone), and something the user is (such as a fingerprint or facial recognition). 
MFA is considered to be even more secure than 2FA, as it adds multiple layers of protection and makes it much more difficult for a hacker to gain access to an account or system. ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:5:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Phishing Scams It is also important to be aware of phishing scams, which are attempts to trick individuals into giving away their personal information. These scams can take the form of emails or text messages that appear to be from a legitimate source, such as a bank or government agency, but are actually from a hacker trying to steal personal information. To avoid falling for a phishing scam, it is important to be skeptical of unsolicited emails or messages and to not click on any links or provide any personal information unless you are certain of the source. 
","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:6:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["Other"],"content":"Conclusion In conclusion, cybersecurity is an important practice that helps to protect personal information, such as passwords, from unauthorized access. Tools like Have I Been Pwned (HIBP) can be used to check the strength of a password and determine if it has been involved in a data breach. The strength of a password is determined by its complexity and length, and the time it takes to crack a password depends on the resources available to the person trying to crack it. To protect personal information, it is important to use strong passwords and passphrases, and to use two-factor authentication. Additionally, it is important to be aware of phishing scams and to be skeptical of unsolicited emails or messages. 
References Hivesystems HIBQ Brute-force attack EFF Have I been Pwned (HIBP) Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams How to Create Your Own Have I Been Pwned (HIBP) API Request With Python ","date":"Monday, Jan 23, 2023","objectID":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/:7:0","tags":["automation","scripts","powershell","posh","pwsh","windows"],"title":"Understanding and Improving Your Cybersecurity Posture in 2023: The Importance of strong Passwords, 2FA and Awareness of Phishing Scams","uri":"/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2fa-and-awareness-of-phishing-scams/"},{"categories":["GitHub"],"content":"You can use GitHub Pages to host a website about yourself, your organization, or your project directly from a repository on GitHub.com. Hugo takes Markdown, runs them through a theme template, and spits out HTML files that you can easily deploy online – extremely fast. Why GitHub Pages? GitHub Pages is a static site hosting service that takes HTML, CSS, and JavaScript files straight from a repository on GitHub, optionally runs the files through a build process, and publishes a website. You can see examples of GitHub Pages sites in the GitHub Pages examples collection. You can host your site on GitHub’s github.io domain or your own custom domain. For more information, see Using a custom domain with GitHub Pages. Why Hugo and not Gatsby or Jekyll? A static site generator like Hugo takes Markdown content files, runs them through a theme template, and spits out HTML files that you can easily deploy online – and it does all of this extremely fast. Primarily because of speed, performance, simplicity, easy setup and most importantly the templates look generally better. More details see Gatsby vs. Jekyll vs. 
Hugo Quick Start Hugo Quick Start ","date":"Sunday, Dec 11, 2022","objectID":"/create-your-static-page-with-github-pages/:0:0","tags":["github","pages","hugo","markdown","theme","templates"],"title":"Create a Static Web Page With Github Pages and Hugo","uri":"/create-your-static-page-with-github-pages/"},{"categories":["GitHub"],"content":"Windows choco install hugo-extended -confirm ","date":"Sunday, Dec 11, 2022","objectID":"/create-your-static-page-with-github-pages/:1:0","tags":["github","pages","hugo","markdown","theme","templates"],"title":"Create a Static Web Page With Github Pages and Hugo","uri":"/create-your-static-page-with-github-pages/"},{"categories":["GitHub"],"content":"Linux and macOS brew install hugo ","date":"Sunday, Dec 11, 2022","objectID":"/create-your-static-page-with-github-pages/:2:0","tags":["github","pages","hugo","markdown","theme","templates"],"title":"Create a Static Web Page With Github Pages and Hugo","uri":"/create-your-static-page-with-github-pages/"},{"categories":["GitHub"],"content":"Commands Following are the most important Hugo commands you need, of course there are more but you don’t need them for now, otherwise just type hugo help. hugo config – displays the configuration for a Hugo site hugo new – lets you create new content for your site hugo server – starts a local development server hugo version – displays the current Hugo version Hugo Configuration File Let’s have a look at the first rows of the config.toml on my page which is a bit bigger than usual since it uses settings in regards to the theme I’m using, but if you read through it, it just makes sense. Here are the first few lines to give you an idea: baseURL = \"https://graef.io/\" defaultContentLanguage = \"en\" languageCode = \"en\" title = \"Automate everything \" enableRobotsTXT = true theme = \"loveit\" Hugo Templates You can find plenty of awesome templates here Hugo Templates. The template used for this page is called LoveIt and can be found here. 
Another great starter theme I can recommend is Coder. How to add a Theme to your Hugo Site Once you’ve chosen a theme and cloned (or submoduled) it into your /themes/ directory, make sure to set it in your config.toml theme = \"\u003ctheme_name\u003e\" Test and preview your Hugo Site hugo server -D Hugo will then build your site’s pages and make them available at http://localhost:1313/. How to add Posts to your Hugo Site hugo new posts/2021/2022-11-12-sample-post.md which creates an empty markdown file with metadata like this --- title: \"2022 12 12 Sample Post\" date: 2022-11-12T12:24:28+06:00 draft: true --- Create a GitHub Pages Site After you’re done creating your site locally you can upload it to GitHub Pages. For more details, see Creating a GitHub Pages Site. Continuous Deployment of GitHub Pages with GitHub Actions For details, see my blog post Continously Deploy Your Github Pages Site With Github Actions. References Creating a GitHub Pages site Gatsby vs. Jekyll vs. Hugo Hugo Templates LoveIt Theme Coder Theme Creating a GitHub Pages Site ","date":"Sunday, Dec 11, 2022","objectID":"/create-your-static-page-with-github-pages/:3:0","tags":["github","pages","hugo","markdown","theme","templates"],"title":"Create a Static Web Page With Github Pages and Hugo","uri":"/create-your-static-page-with-github-pages/"},{"categories":["Windows"],"content":"Sometimes you have to set up your laptop/notebook fresh and also prepare your developer environment and the tools you need. I noticed this again today when I rebooted my machine and spent almost 2 hours installing the tools I need to work. I have put together a list of commands you can use to prepare your environment using Windows Package Manager (WinGet), which gives you a kick start at the next reset so you can continue working after just 10 minutes. 
Prerequisites Windows 11 (If Windows 10 then install winget via App Installer first) Commands # Visual Studio Code winget install Microsoft.VisualStudioCode # Azure Tools winget install Microsoft.AzureStorageExplorer winget install Microsoft.Bicep winget install Microsoft.AzureCLI # PowerShell 7 winget install Microsoft.PowerShell # Windows Terminal (if on Windows 10) winget install Microsoft.WindowsTerminal # Git and GitHub CLI winget install Git.Git winget install GitHub.cli git config --global user.name \"John Doe\" git config --global user.email [email protected] # Azure PowerShell pwsh (Make sure to start your terminal session with PowerShell 7) Install-Module Az # Visual Code Extensions code --install-extension eamodio.gitlens code --install-extension telesoho.vscode-markdown-paste-images code --install-extension ms-azuretools.vscode-bicep code --install-extension ms-azuretools.vscode-docker code --install-extension ms-vscode-remote.remote-containers code --install-extension ms-vscode-remote.remote-ssh code --install-extension ms-vscode-remote.remote-wsl code --install-extension ms-vscode.azurecli code --install-extension ms-vscode.powershell Other Commands # Chocolatey Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) # Hugo choco install hugo -confirm # AzCopy Invoke-WebRequest -Uri \"https://aka.ms/downloadazcopy-v10-windows\" -OutFile AzCopy.zip -UseBasicParsing Expand-Archive ./AzCopy.zip ./AzCopy -Force mkdir \"$home/AzCopy\" Get-ChildItem ./AzCopy/*/azcopy.exe | Move-Item -Destination \"$home\\AzCopy\\AzCopy.exe\" $userenv = [System.Environment]::GetEnvironmentVariable(\"Path\", \"User\") [System.Environment]::SetEnvironmentVariable(\"PATH\", $userenv + \";$home\\AzCopy\", \"User\") # Other Visual Code Extensions code --install-extension AzurePolicy.azurepolicyextension code --install-extension ms-azuretools.vscode-azureresourcegroups code --install-extension 
ms-azuretools.vscode-azurestorage code --install-extension ms-azuretools.vscode-azurevirtualmachines code --install-extension ms-dotnettools.vscode-dotnet-runtime code --install-extension ms-vscode-remote.remote-ssh-edit code --install-extension ms-vscode-remote.remote-ssh-explorer code --install-extension ms-vscode-remote.vscode-remote-extensionpack code --install-extension ms-vscode.azure-account code --install-extension ms-vscode.vscode-node-azure-pack code --install-extension ms-vsliveshare.vsliveshare code --install-extension ms-vsonline.vsonline code --install-extension msazurermtools.azurerm-vscode-tools MacOS /bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\" brew install --cask powershell Add-Content -Path $PROFILE.CurrentUserAllHosts -Value '$(/opt/homebrew/bin/brew shellenv) | Invoke-Expression' pwsh Install-Module Az brew install --cask keepassxc References Install winget via App Installer ","date":"Friday, May 13, 2022","objectID":"/fresh-development-machine-setup-using-winget-within-10-minutes/:0:0","tags":["winget","automation","scripts","powershell","posh","pwsh","windows"],"title":"Fresh Development Setup within 10 Minutes using Winget","uri":"/fresh-development-machine-setup-using-winget-within-10-minutes/"},{"categories":["Crypto"],"content":"I created my own crypto Token and I want to show you how to create your own - It only takes about 15-30 minutes. Note First of all, we are NOT going to create a cryptocurrency. We are going to create a cryptocurrency Token. If you want to learn more about the differences please check out my other post about basic Crypto Tokenomics. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:0:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Why should you create your own Token? 
Custom Tokens can represent an investor’s stake in a business or they can serve an economic purpose like a payment instrument. As a token holder, you can use them as a means of payment, utility or trade them with other securities to make a profit. But the reason we’re creating it is simply to better understand the process behind the blockchain technology - Learning by doing. And simply because it’s fun to learn something new in the world of cryptocurrencies. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:1:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Requirements Computer (Windows, Linux or MacOS) Browser (Edge, Chrome or Firefox) Solana Tool Suite GitHub Account Phantom Wallet (min. 0.1-0.3 $SOL to cover transaction fees) Internet Brain ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:2:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Workflow Here is a short overview of what we’re going to do and to give you the bigger picture: ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:3:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Set up your Solana Wallet Let’s start with setting up your Solana Wallet first. Before you begin Make sure you have installed the Solana Command Line Tools. 
MacOS \u0026 Linux Open your Terminal and copy and paste the following command sh -c \"$(curl -sSfL https://release.solana.com/v1.9.3/install)\" Windows Open PowerShell or a Command Prompt (cmd.exe) as an Administrator and copy and paste the following command curl https://release.solana.com/v1.9.3/solana-install-init-x86_64-pc-windows-msvc.exe --output C:\\temp\\solana-install-init.exe --create-dirs Release Feel free to replace v1.9.3 with the release tag matching your desired release version, or just use any of the symbolic channel names: stable, beta, or edge. Confirm you have the desired version of solana installed by entering: solana --version ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:4:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Generate a File System Wallet Keypair solana-keygen new --outfile keypair.json Generate a new Wallet Keypair. The content of the keypair.json file looks like this: [53,182,131,247,119,117,227,207,112,73,170,126,222,197,244,99,215,107,255,202,33,43,36,17,104,111,157,246,196,192,174,95,240,23,238,206,118,215,154,238,229,96,11,37,156,123,51,223,5,231,17,117,86,136,103,14,75,95,175,132,148,54,1,13] It’s basically an array of 64 values, the first 32 represent the private key private_key_bytes = [53,182,131,247,119,117,227,207,112,73,170,126,222,197,244,99,215,107,255,202,33,43,36,17,104,111,157,246,196,192,174,95] public_key_bytes = [240,23,238,206,118,215,154,238,229,96,11,37,156,123,51,223,5,231,17,117,86,136,103,14,75,95,175,132,148,54,1,13] Remember Your keypair.json file is unencrypted. Do not share this file with others. Remember, and as we learned in Hot and cold Crypto Wallet (Address), a wallet address like shown in the output above, is just a Base58Check Encoded Public Key Hash. 
That’s why your wallet address 8mNvt36N7bW3vuWJ3pFDTSWFp2i7fD1MF8bv6mTFMj8f looks different than your public_key_bytes. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:4:1","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Set Solana Config By default your RPC URL should be set to https://api.mainnet-beta.solana.com already but here is the command to verify it: solana config get If it’s not set to the Mainnet, you can set it with this command: solana config set --url https://api.mainnet-beta.solana.com solana config set --keypair keypair.json ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:4:2","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Verify your Public Key (Optional) You can also verify if your public key belongs to your key pair with the following command: solana-keygen verify 8mNvt36N7bW3vuWJ3pFDTSWFp2i7fD1MF8bv6mTFMj8f .\keypair.json Verify if your public key comes from this Keypair. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:4:3","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Import into your Phantom Wallet (Optional) Copy the contents of your keypair.json and hit import on your Phantom Wallet. Import into your Phantom Wallet. 
","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:4:4","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Top up your wallet Let’s top up the wallet address you just created with some $SOL to cover the transaction fees. Top up your Wallet. If you imported your wallet address into Phantom you can check the balance there or you can use solana balance to check: solana balance Check your balance. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:5:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Create Token Now that we have sent some $SOL to our wallet address we can start creating our actual Token Address, which we will later use to mint fresh Tokens and send them to our Token Account which we will also create in the same process. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:6:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Create Token and Token Account spl-token create-token Token Address: 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q Create your Token. spl-token create-account 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q Token Account: CB7WEx4wtFiy6ftJWbaSvBfw1pbxe3wq65DjEbWmGRve Create your Token Account. Note Your Token Account is associated with your Token Address. 
","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:6:1","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Mint Token Account We’re going to mint 1,000,000 tokens (out of thin air) with this token address 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q and then send them to our token account CB7WEx4wtFiy6ftJWbaSvBfw1pbxe3wq65DjEbWmGRve. spl-token mint 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q 1000000 CB7WEx4wtFiy6ftJWbaSvBfw1pbxe3wq65DjEbWmGRve ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:6:2","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Check your Token Account Balance spl-token accounts Token Account Balance. ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:6:3","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Limit Token Supply Let’s limit your supply to prevent unlimited minting. 
spl-token authorize 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q mint --disable ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:7:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Transfer token to a browser wallet spl-token transfer --fund-recipient tokenAddress transferAmount recipientAddress spl-token transfer --fund-recipient 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q 1000000 F7tHkHNUkM2R3w2A6fyVDK29m9y8oNx851nhYoa4SuRp ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:8:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Register your Token To actually finish the creation of your Token we want to have it officially listed on the Solana Registry. I am not going into details here, so feel free to check out my pull request at github.com/solana-labs/token-list for reference. But basically the only details you need for your PR are: \"address\": \"2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q\", \"symbol\": \"\u003cyourTokenSymbol\u003e\", \"name\": \"\u003cyourTokenName\u003e\", \"logoURI\": \"https://raw.githubusercontent.com/\u003cyourLogoURI\u003e.png\", Once your PR is merged you can see your token officially listed on your Phantom wallet and the official solana.com registry (see solscan.io or Solana Explorer) Token Listing ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:9:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["Crypto"],"content":"Summary Okay, let’s summarize the whole thing again and show what we have done. 
We have created the following: We created a Wallet Address: 8mNvt36N7bW3vuWJ3pFDTSWFp2i7fD1MF8bv6mTFMj8f to be used as a so-called Authority to fund the creation but also to mint our Token. We created a Token (Address): 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q which is then associated with our Token Account: CB7WEx4wtFiy6ftJWbaSvBfw1pbxe3wq65DjEbWmGRve. In the end we registered our Token 2LTQripZZZXekBg5311zu4zdzKr7VwiQ81RzVL62S72q on the Solana Token Registry to make it available. If you want to know how to create a secure wallet address, check out my post Create a secure and anonymous Crypto Wallet. References Solana Token Program Create a secure and anonymous Crypto Wallet Hot and Cold Crypto Wallet (Address) Crypto Token, Coins and Mnemonics Generate Bitcoin Address Generate Ethereum Address How Jason Bourne Stores His Bitcoin Trusted Third Parties are Security Holes ","date":"Sunday, Jan 9, 2022","objectID":"/create-your-own-cryptocurrency-token/:10:0","tags":["crypto","wallet","solana","token","mint","blockchain"],"title":"Create your own Cryptocurrency Token (Solana)","uri":"/create-your-own-cryptocurrency-token/"},{"categories":["TIL"],"content":"Today I learned how to reload/restart my window in Visual Studio Code. This is especially helpful if you want to restart Visual Studio Code and leave all your current terminals open. Open the command palette (Ctrl + Shift + P) and execute the command \u003eReload Window. Reload Window Note However, reloading the window is usually not enough to make newly added environment variables available in your new terminal sessions - this still requires a complete restart of Visual Studio Code. This, by the way, reloads the path variables in the specific terminal instance without restarting VSCode. $env:Path = [System.Environment]::GetEnvironmentVariable(\"Path\",\"Machine\") + \";\" + [System.Environment]::GetEnvironmentVariable(\"Path\",\"User\") But I think restarting VSCode is way faster than typing this command. 
","date":"Friday, Jan 7, 2022","objectID":"/how-to-reload-vscode-window/:0:0","tags":["vscode","reload","restart","terminal"],"title":"How to reload your VSCode Window","uri":"/how-to-reload-vscode-window/"},{"categories":["Crypto","TIL"],"content":"Today I learned how to explain the differences between a hot and cold crypto wallet and a wallet address. ","date":"Tuesday, Jan 4, 2022","objectID":"/hot-and-cold-wallet-address/:0:0","tags":["crypto","wallet","coins","hash","bitcoin","nano ledger"],"title":"Hot and cold Crypto Wallet (Address)","uri":"/hot-and-cold-wallet-address/"},{"categories":["Crypto","TIL"],"content":"Wallet A wallet is a collection of private keys, like a key ring. It holds copies of each private key and each private key’s corresponding address. A private key is necessary to spend from an address. Note Wallets contain keys, not coins. Each user has a wallet containing keys. These keys in turn can prove that the output (coins) of a blockchain transaction belong to a user. Wallets are really keychains containing pairs of private/public keys. Users sign transactions with the keys, thereby proving they own the transaction outputs (their coins). The coins are stored on the blockchain in the form of transaction-outputs. ","date":"Tuesday, Jan 4, 2022","objectID":"/hot-and-cold-wallet-address/:1:0","tags":["crypto","wallet","coins","hash","bitcoin","nano ledger"],"title":"Hot and cold Crypto Wallet (Address)","uri":"/hot-and-cold-wallet-address/"},{"categories":["Crypto","TIL"],"content":"Wallet Address An address is a public key to which transactions can be sent. To be accurate, an address represents a hash of a public key of an asymmetric key pair (public and private key). Figure 1: A wallet address represents a hash of a public key. 
Wallet Address (Hash): 0x207BC0f0C4E20C806299BE54ceca0a5b9cf07602 Public Key: 0x0268968b2ffdf391178374346875a54974054cfb15166a176bf93dac67d861606e Private Key: 0x9b3f56cfbed5889ead6b11a038cc10141c9247edddc80b2556f4703048ee126b Source: Mastering Bitcoin, A. Antonopoulos, 2017. Figure 2: Public key to bitcoin address: conversion of a public key into a bitcoin address. ","date":"Tuesday, Jan 4, 2022","objectID":"/hot-and-cold-wallet-address/:2:0","tags":["crypto","wallet","coins","hash","bitcoin","nano ledger"],"title":"Hot and cold Crypto Wallet (Address)","uri":"/hot-and-cold-wallet-address/"},{"categories":["Crypto","TIL"],"content":"Hot Wallet A hot wallet is a tool that allows you to receive and send tokens/coins. Compared to a cold wallet it’s faster and makes it easier to trade or spend crypto since it’s connected to the internet. ","date":"Tuesday, Jan 4, 2022","objectID":"/hot-and-cold-wallet-address/:2:1","tags":["crypto","wallet","coins","hash","bitcoin","nano ledger"],"title":"Hot and cold Crypto Wallet (Address)","uri":"/hot-and-cold-wallet-address/"},{"categories":["Crypto","TIL"],"content":"Cold Wallet A cold wallet is not connected to the internet and therefore stands a far lesser risk of being compromised. Cold wallets can also be referred to as offline (paper) wallets or hardware (USB) wallets. References Create a secure and anonymous Crypto Wallet Cold Crypto Wallets and MITM Attacks Crypto Token, Coins and Mnemonics Mastering Bitcoin Hot Wallet? ","date":"Tuesday, Jan 4, 2022","objectID":"/hot-and-cold-wallet-address/:2:2","tags":["crypto","wallet","coins","hash","bitcoin","nano ledger"],"title":"Hot and cold Crypto Wallet (Address)","uri":"/hot-and-cold-wallet-address/"},{"categories":["Crypto","TIL"],"content":"Today I learned to simply explain the difference between a crypto token, coin and a mnemonic. 
","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:0:0","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto","TIL"],"content":"Token The term Token describes all cryptocurrencies apart from Bitcoin and Ether(eum), although these are also tokens in principle. In the second meaning, it describes certain digital crypto-assets that are built on top of another cryptocurrency blockchain like DeFi Tokens, Non-Fungible Tokens (NFTs), Governance Tokens or Security Tokens. ","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:1:0","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto","TIL"],"content":"Coin A crypto coin is the native coin of a blockchain like Bitcoin or Ether, which is then used to trade currency or store value. ","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:2:0","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto","TIL"],"content":"Altcoin Altcoin means any cryptocurrency that is not Bitcoin. Since Bitcoin has always dominated the entire cryptocurrency market, it also dominates the discussion about cryptocurrencies and therefore any other coin is called an altcoin. Bitcoin was also the first cryptocurrency to be issued, and it introduced the concepts of blockchain and cryptocurrency. 
","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:2:1","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto","TIL"],"content":"Stable Coin A stablecoin is a cryptocurrency where the value of each coin is tied to an external asset such as USD or EUR. To be correct, each stablecoin is also an altcoin. Thus, the value of a stablecoin can be pegged to any asset. Pegging one asset to another means that both values always stay in the same relationship. ","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:2:2","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto","TIL"],"content":"Mnemonic A Secret Recovery Phrase, mnemonic phrase, or Seed Phrase is a set of typically either 12 or 24 words, basically a group of easy-to-remember words. These words are pulled from a specific list of 2048 words known as the BIP39 wordlist. It serves as a backup to recover your wallet and coins in the event your wallet becomes compromised, lost, or destroyed. Notice If someone has your mnemonic phrase, they can recover your public and private key and access your wallet. Never share your recovery/mnemonic phrase and make sure to store it in a safe place. References Create a secure and anonymous Crypto Wallet Cold Crypto Wallets and MITM Attacks Hot and Cold Crypto Wallet (Address) Mastering Bitcoin Hot Wallet? What is a token? 
","date":"Monday, Jan 3, 2022","objectID":"/crypto-token-coins-and-mnemonics/:3:0","tags":["crypto","wallet","NFT","btc","coins","DeFi","bitcoin","altcoin","stablecoin"],"title":"Crypto Tokens, Coins and Mnemonics","uri":"/crypto-token-coins-and-mnemonics/"},{"categories":["Crypto"],"content":"Are my cold wallet and the generated addresses really secure? I would like to familiarise you with the security topics of cold wallets and what you should pay attention to. ","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:0:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Cold Wallet A wallet is a collection of private keys, like a key ring. It holds copies of each private key and each private key’s corresponding address. A private key is necessary to spend from an address. Other than a hot wallet, a cold wallet is not connected to the internet and therefore stands a far lesser risk of being compromised. Cold wallets can also be referred to as offline (paper) wallets or hardware (USB) wallets. ","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:1:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Man-in-the-Middle Attack (MITM) A man-in-the-middle attack (MITM) is a general term for when a perpetrator infiltrates a conversation between a user and an application to either eavesdrop or impersonate one of the two parties to make it appear that a normal exchange of information is underway. Source: Man-in-the-middle attack, Wikipedia. Figure 1: Illustration of a MITM attack. 
","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:2:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Case 1 Let’s assume you generate a wallet address (hashed public and private key) via bitaddress.org or myetherwallet.com and during the generation or transmission a MITM attack occurs, be it through JavaScript hijacking, SSL offloading, key/screen logging or even compromised hardware. This key pair would thus be compromised and insecure, as the attacker would possess both keys or, in any case, the private key. Figure 2: Illustration of a MITM attack of a soft wallet. ","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:3:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Case 2 Let’s assume you use a hardware wallet like a Nano Ledger S/X or BitBox, which generates the public and private key for you using the manufacturer’s software and uses a recovery mnemonic/phrase as the seed. Key/screen logging would also be fatal here: keys and mnemonic could be captured and would hence be visible to attackers. Figure 3: Illustration of a MITM attack of a cold/hardware wallet. ","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:4:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Conclusion Ultimately, the only option is to manually create your own address using BIP32/BIP39/BIP38/BIP44 on a secure, offline and trusted device, i.e. not a mobile phone or workstation with internet. 
Figure 4: Illustration of a MITM attack of a soft wallet. If I held several 6/7-digit amounts in Ethereum and Bitcoin and wanted to make sure that, for newly created addresses, the mnemonic and private keys were never seen by anyone else from the time of generation through safekeeping, then I think this is definitely a safer way than just quickly generating an address via an app or online. Of course, there are far more paranoid ways, but I don’t think I’m far off the mark. If you want to know how to create a secure wallet address, check out my post Create a secure and anonymous Crypto Wallet. References Create a secure and anonymous Crypto Wallet Hot and Cold Crypto Wallet (Address) Crypto Token, Coins and Mnemonics Generate Bitcoin Address Generate Ethereum Address How Jason Bourne Stores His Bitcoin Trusted Third Parties are Security Holes ","date":"Sunday, Jan 2, 2022","objectID":"/cold-crypto-wallets-and-mitm-attacks/:5:0","tags":["crypto","wallet","btc","man-in-the-middle","mitm","nano ledger"],"title":"Cold Crypto Wallets and MITM Attacks","uri":"/cold-crypto-wallets-and-mitm-attacks/"},{"categories":["Crypto"],"content":"Creating a wallet address (Bitcoin/Ethereum) is easy, but is it really secure? In this article we look at a secure but also anonymous approach using tools like Live USB BitKey to create your own anonymous crypto wallet you can trust. As soon as you get deeper into the crypto world, you realize that it is essential to take a closer look at the issues of security and anonymity and not to rely on others. Especially when the amounts and number of your transactions increase. 
","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:0:0","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Are you sure your wallet is secure from hackers? Let’s say you’re going to transfer $120,000 from your wallet to another wallet. Can you say with 100% certainty that no one really has access to your wallet or your private key? Are you sure there is no middleman active? Sending transactions on decentralised blockchains using the Bitcoin or Ethereum protocol is great, but it requires even more conscientious handling due to the absence of a central authority. Since there is no bank or liquidity provider that can or will control your transaction in the event of a mistake, you are solely responsible for handling your own money and digital crypto-assets with more care than usual. Note When you deposit money at a bank, you let them worry about the security, right? However, the keys to your transactions and coins are stored on your computer and that means you are fully responsible for securing them. ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:1:0","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Are blockchain transactions really anonymous? A blockchain is a public ledger, which means all transactions are publicly viewable. When you make a transaction, your wallet address and the transaction details are recorded in the blockchain. 
Note As long as there is no link between your wallet address and your identity, your transaction stays anonymous. However, as soon as a connection is made between your address and your identity - let’s say your IP address, your email address or anything else identifiable - your cover is blown. And why? Because from that point on, anybody can link your address to every transaction you’ve ever made on the blockchain. ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:2:0","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Create a secure wallet (the quick \u0026 easy way) Go to Coinbase.com or binance.com. Click Create Wallet. Done. No, that’s exactly the reason why I’m writing this article. Since it is so easy to create a wallet, people are also too careless with their wallets. 
Things like writing your seed phrase on a piece of paper and losing it, writing your seed phrase on a digital note (Notepad, Notes) without securing it, not enabling multi-factor authentication, or not enabling password protection on your soft wallets ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:3:0","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Security concerns you should be aware of It’s important to shed some light on some major security concerns, and you must be aware of common threats, such as Modern operating systems are highly complex, which leads to large attack surfaces and a constant leak of information without the user’s knowledge or consent Duping users through fake cryptocurrencies Phishing methods and scams Someone knowing the confidential lock PIN code of your phone Any other attempts to steal your cryptographic keys Wallets can be hacked by using old password backups (even if you think you’re safe because you change your password frequently) Sybil attacks ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:3:1","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Security Precautions you should take Make sure to Enable Multi-Factor Authentication (MFA) Enable Password Protection, and don’t use the same password everywhere and for everything Back up your wallet at multiple secure locations (like on a USB drive or on your hard drive) Keep your computer (Mac, Windows or Linux) up to date with the latest 
security updates and patches Be careful when opening emails (Desktop or Mobile), as phishing scams are becoming more and more sophisticated Generally use a VPN like NordVPN or ExpressVPN Avoid public Wi-Fi networks Make sure your browser (Chrome/Edge/Firefox) indicates the sites you’re browsing are encrypted with SSL Double-check the address of the recipient before sending any payments ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:3:2","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["Crypto"],"content":"Create a secure wallet (the more secure way) Since we’re going to create a cold wallet, you should know that cold wallets can also be referred to as Offline Wallets like paper, NFC chips or any data storage medium Cold Storage Wallets are hardware (USB) devices like Nano Ledger or Trezor Note Hardware wallets, however, can also be called offline wallets, since they do not have direct access to the Internet and vice versa. Air-Gap Before you start, remember to disconnect any LAN cables from your computer/laptop. Download the BitKey ISO. Create a bootable USB drive with Rufus. Start/Boot BitKey from your USB drive. Bootscreen BitKey Remove your BitKey USB boot device. Insert a USB drive to save your data on it later. Bootscreen BitKey Now you can choose any Wallet Generator you want to use to generate your address(es) Remember Do not use your phone camera to take a photo of your generated keys, since this breaks the air gap. Your mobile phone is usually always connected to the internet. Electrum Bitaddress.org Warp Wallet Coinb.in Once you have generated your mnemonic seed phrase and/or keys, write them down on a physical piece of paper or store them on your USB drive. 
References Cold Crypto Wallets and MITM Attacks Hot and Cold Crypto Wallet (Address) Crypto Token, Coins and Mnemonics BIP39 wordlist Why Storing Bitcoin in a Single Wallet is a Bad Idea Extract Private Key from Ledger Mnemonic Get/Generate private keys from mnemonic or create own Mnemonic Generate Bitcoin Address Generate Ethereum Address Bitkey Live USB Trusted Third Parties are Security Holes The closest you can get to perfectly secure Bitcoin transactions (without doing them in your head) Cold Wallets (offline and hardware) and MITM attacks Why you need several Crypto Wallets ","date":"Saturday, Jan 1, 2022","objectID":"/create-a-secure-and-anonymous-wallet-address/:4:0","tags":["crypto","wallet","btc","secure","anonymous","privacy","bitcoin","paper wallet","cold wallet","bitkey","coinbase","satoshi","liveusb"],"title":"Create a secure and anonymous Crypto Wallet","uri":"/create-a-secure-and-anonymous-wallet-address/"},{"categories":["TIL","GitHub"],"content":"Use this nice PowerShell and Bash one-liner to automatically clean up (delete) your local branch once your remote branch is deleted (merged). 
","date":"Monday, Sep 20, 2021","objectID":"/clean-up-local-and-remote-branches/:0:0","tags":["git","fetch","prune","branch","powershell","bash"],"title":"Automatically clean up local once remote branch is deleted","uri":"/clean-up-local-and-remote-branches/"},{"categories":["TIL","GitHub"],"content":"One-liner (PowerShell) git checkout main; git pull; git remote update origin --prune; git branch -vv | Select-String -Pattern \": gone]\" | % { $_.toString().Trim().Split(\" \")[0]} | % { git branch -D $_ } This is how it looks: ","date":"Monday, Sep 20, 2021","objectID":"/clean-up-local-and-remote-branches/:1:0","tags":["git","fetch","prune","branch","powershell","bash"],"title":"Automatically clean up local once remote branch is deleted","uri":"/clean-up-local-and-remote-branches/"},{"categories":["TIL","GitHub"],"content":"Step by step (PowerShell) git checkout main git pull git remote update origin --prune git branch -vv | Select-String -Pattern \": gone]\" | % { $_.toString().Trim().Split(\" \")[0]} | % { git branch -D $_ } ","date":"Monday, Sep 20, 2021","objectID":"/clean-up-local-and-remote-branches/:2:0","tags":["git","fetch","prune","branch","powershell","bash"],"title":"Automatically clean up local once remote branch is deleted","uri":"/clean-up-local-and-remote-branches/"},{"categories":["TIL","GitHub"],"content":"One-liner (bash) git checkout main; git pull; git remote update origin --prune; git branch -vv | grep ': gone]' | awk '{print $1}' | xargs git branch -D This is how it looks: ","date":"Monday, Sep 20, 2021","objectID":"/clean-up-local-and-remote-branches/:3:0","tags":["git","fetch","prune","branch","powershell","bash"],"title":"Automatically clean up local once remote branch is deleted","uri":"/clean-up-local-and-remote-branches/"},{"categories":["TIL","GitHub"],"content":"Step by step (bash) git checkout main git pull git remote update origin --prune git branch -vv | grep ': gone]' | awk '{print $1}' | xargs git branch -D Update: Added 
this little PowerShell snippet with which you can clean up all your local branches that are already merged into main. It is faster than doing it manually one by one. Param( [string]$destinationFolder = \".\" ) $repos = Get-ChildItem -Path $destinationFolder -Directory foreach ($repo in $repos) { $($repo.FullName) Set-Location \"$($repo.FullName)\" git checkout main git pull git remote update origin --prune git branch -vv | Select-String -Pattern \": gone]\" | % { $_.toString().Trim().Split(\" \")[0]} | % { git branch -D $_ } } References git-fetch git prune git checkout ","date":"Monday, Sep 20, 2021","objectID":"/clean-up-local-and-remote-branches/:4:0","tags":["git","fetch","prune","branch","powershell","bash"],"title":"Automatically clean up local once remote branch is deleted","uri":"/clean-up-local-and-remote-branches/"},{"categories":["TIL"],"content":"If you want to start your fork from scratch while keeping the current upstream/main as your base, use this. There is a difference between Syncing a fork and Cleaning up a fork. ","date":"Thursday, Sep 9, 2021","objectID":"/clean-up-and-sync-your-git-fork/:0:0","tags":["git","github","fork","upstream","git push","git remote"],"title":"Clean Up and Sync Your Git Fork","uri":"/clean-up-and-sync-your-git-fork/"},{"categories":["TIL"],"content":"Clean up your fork from the command line git remote add upstream /url/to/original/repo git fetch upstream git checkout main git reset --hard upstream/main git push -f You can also just delete your fork on the web UI and create a new fork. 
","date":"Thursday, Sep 9, 2021","objectID":"/clean-up-and-sync-your-git-fork/:1:0","tags":["git","github","fork","upstream","git push","git remote"],"title":"Clean Up and Sync Your Git Fork","uri":"/clean-up-and-sync-your-git-fork/"},{"categories":["TIL"],"content":"Syncing a fork from the command line git remote add upstream /url/to/original/repo git fetch upstream git checkout main git merge upstream/main GitHub also gives you the opportunity to sync a fork from the web UI. References Git Basics - Working with Remotes Syncing a fork Syncing a fork from the web UI ","date":"Thursday, Sep 9, 2021","objectID":"/clean-up-and-sync-your-git-fork/:2:0","tags":["git","github","fork","upstream","git push","git remote"],"title":"Clean Up and Sync Your Git Fork","uri":"/clean-up-and-sync-your-git-fork/"},{"categories":["TIL"],"content":" Today I learned How to find and replace with a newline in Visual Studio Code. In the local searchbox (Ctrl + F) you can insert newlines by pressing Ctrl + Enter. (Local searchbox) If you use the global search (Ctrl + Shift + F) you can insert newlines by pressing Shift + Enter. (Global search) If you want to search across multiple lines using literal characters, remember to enable the rightmost regex icon. (Regex) References Visual Studio Code Keyboard Shortcuts Download Visual Studio Code (Mac, Linux, Windows) ","date":"Friday, Aug 6, 2021","objectID":"/find-and-replace-with-a-newline-in-visual-studio-code/:0:0","tags":["Automation","VSCode","find replace","ctrlf"],"title":"Find and replace with a newline in Visual Studio Code","uri":"/find-and-replace-with-a-newline-in-visual-studio-code/"},{"categories":["TIL"],"content":" Today I learned Visual Studio Code Keyboard Shortcuts for Windows, macOS and Linux. Visual Studio Code lets you perform most tasks directly from the keyboard. This page lists the default bindings (keyboard shortcuts). 
","date":"Monday, Aug 2, 2021","objectID":"/visual-studio-code-keyboard-shortcuts/:0:0","tags":["VSCode","keyboard","shortcuts","Windows","MacOS","Linux"],"title":"Visual Studio Code Keyboard Shortcuts","uri":"/visual-studio-code-keyboard-shortcuts/"},{"categories":["TIL"],"content":"Keyboard Shortcuts for Windows Keyboard Shortcuts for Windows ","date":"Monday, Aug 2, 2021","objectID":"/visual-studio-code-keyboard-shortcuts/:1:0","tags":["VSCode","keyboard","shortcuts","Windows","MacOS","Linux"],"title":"Visual Studio Code Keyboard Shortcuts","uri":"/visual-studio-code-keyboard-shortcuts/"},{"categories":["TIL"],"content":"Keyboard Shortcuts for macOS Keyboard Shortcuts for macOS ","date":"Monday, Aug 2, 2021","objectID":"/visual-studio-code-keyboard-shortcuts/:2:0","tags":["VSCode","keyboard","shortcuts","Windows","MacOS","Linux"],"title":"Visual Studio Code Keyboard Shortcuts","uri":"/visual-studio-code-keyboard-shortcuts/"},{"categories":["TIL"],"content":"Keyboard Shortcuts for Linux Keyboard Shortcuts for Linux References Key Bindings for Visual Studio Code Visual Studio Code Keyboard shortcuts for Windows Visual Studio Code Keyboard shortcuts for macOS Visual Studio Code Keyboard shortcuts for Linux ","date":"Monday, Aug 2, 2021","objectID":"/visual-studio-code-keyboard-shortcuts/:3:0","tags":["VSCode","keyboard","shortcuts","Windows","MacOS","Linux"],"title":"Visual Studio Code Keyboard Shortcuts","uri":"/visual-studio-code-keyboard-shortcuts/"},{"categories":["TIL","PowerShell"],"content":" Today I learned How to get a list of PowerShell Aliases. An alias is an alternate name for a cmdlet, function, executable file, including scripts. PowerShell includes a set of built-in aliases. You can add your own aliases to the current session and to your PowerShell profile. 
# Cmdlet Get-Alias # Display Alias names (Get-Alias).DisplayName # Display Alias name and definition Get-Alias | select Name, Definition Which gives you the following Outputs Output Name Definition ---- ---------- ? Where-Object % ForEach-Object ac Add-Content cat Get-Content cd Set-Location chdir Set-Location clc Clear-Content clear Clear-Host clhy Clear-History cli Clear-Item clp Clear-ItemProperty cls Clear-Host clv Clear-Variable cnsn Connect-PSSession compare Compare-Object copy Copy-Item cp Copy-Item cpi Copy-Item cpp Copy-ItemProperty cvpa Convert-Path dbp Disable-PSBreakpoint del Remove-Item diff Compare-Object dir Get-ChildItem dnsn Disconnect-PSSession ebp Enable-PSBreakpoint echo Write-Output epal Export-Alias epcsv Export-Csv erase Remove-Item etsn Enter-PSSession exsn Exit-PSSession fc Format-Custom fhx Format-Hex fl Format-List foreach ForEach-Object ft Format-Table fw Format-Wide gal Get-Alias gbp Get-PSBreakpoint gc Get-Content gcb Get-Clipboard gci Get-ChildItem gcm Get-Command gcs Get-PSCallStack gdr Get-PSDrive gerr Get-Error ghy Get-History gi Get-Item gin Get-ComputerInfo gjb Get-Job gl Get-Location gm Get-Member gmo Get-Module gp Get-ItemProperty gps Get-Process gpv Get-ItemPropertyValue group Group-Object gsn Get-PSSession gsv Get-Service gtz Get-TimeZone gu Get-Unique gv Get-Variable h Get-History history Get-History icm Invoke-Command iex Invoke-Expression ihy Invoke-History ii Invoke-Item ipal Import-Alias ipcsv Import-Csv ipmo Import-Module irm Invoke-RestMethod iwr Invoke-WebRequest kill Stop-Process ls Get-ChildItem man help md mkdir measure Measure-Object mi Move-Item mount New-PSDrive move Move-Item mp Move-ItemProperty mv Move-Item nal New-Alias ndr New-PSDrive ni New-Item nmo New-Module nsn New-PSSession nv New-Variable ogv Out-GridView oh Out-Host popd Pop-Location ps Get-Process pushd Push-Location pwd Get-Location r Invoke-History rbp Remove-PSBreakpoint rcjb Receive-Job rcsn Receive-PSSession rd Remove-Item rdr Remove-PSDrive 
ren Rename-Item ri Remove-Item rjb Remove-Job rm Remove-Item rmdir Remove-Item rmo Remove-Module rni Rename-Item rnp Rename-ItemProperty rp Remove-ItemProperty rsn Remove-PSSession rv Remove-Variable rvpa Resolve-Path sajb Start-Job sal Set-Alias saps Start-Process sasv Start-Service sbp Set-PSBreakpoint scb Set-Clipboard select Select-Object set Set-Variable shcm Show-Command si Set-Item sl Set-Location sleep Start-Sleep sls Select-String sort Sort-Object sp Set-ItemProperty spjb Stop-Job spps Stop-Process spsv Stop-Service start Start-Process stz Set-TimeZone sv Set-Variable tee Tee-Object type Get-Content where Where-Object wjb Wait-Job write Write-Output References Get-Alias about_Alias_Provider ","date":"Sunday, Aug 1, 2021","objectID":"/get-powershell-aliases/:0:0","tags":["TIL","PowerShell","scripts","get-alias","alias"],"title":"How to get a list of PowerShell Aliases","uri":"/get-powershell-aliases/"},{"categories":["Projects"],"content":"The goal of this work is to create an affordable, energy efficient and portable mini-supercomputer. Ideally, a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:0:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Evaluation After the cluster has been configured and Kubernetes is ready for use, the entire system is evaluated. The requirements set in advance (see Requirements) are critically reviewed, target and actual values are evaluated, and various comparisons are made. 
This prototype is ideally suited to illustrate and substantiate the research question addressed in this thesis. Section Use case SETI@home deals with a use case that provides information about the extent to which and the purpose for which this custom cluster construction can contribute to scientific research. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Scalability Kubernetes enables easy scaling of applications or pods and the applications within them. Enabled autoscaling scales pods according to their required resources up to a specified limit. In the following example, we test the autoscaling of our cluster using a simple web application and verify the automatic scalability of the system: 1 For this purpose, a Docker image based on a simple web server application is used. This Docker image contains a single web page, which causes a maximum CPU load by simulated users when called. The Docker image is launched, causing a web server to run with the corresponding web page as a pod. Kubernetes' autoscaler is enabled with the following configuration: 2 KUBE_AUTOSCALER_MIN_NODES = 1: The minimum number of nodes to be used is one. KUBE_AUTOSCALER_MAX_NODES = 4: The maximum number of nodes to be used is four. The fifth node thus retains the function as master. KUBE_ENABLE_CLUSTER_AUTOSCALER = true: Activation of the autoscaler. The pod with the web server container is started with the following properties: CPU-PERCENT = 50: This configuration value specifies the value that is maintained to keep all pods at an average CPU load of 50 percent. 
As soon as a pod requires more than half of its available computing power, another pod instance is automatically replicated, i.e. an exact copy of the running pod is created. MIN = 1: At least one Pod is used for scaling. MAX = 10: A maximum of ten replicas of a pod can be used for scaling. After the pod is started and user load is simulated, an increase in CPU load to 250 percent is observed, and seven pods have already been scaled out: $ kubectl get hpa NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE php-apache deployment/php-apache/scale 50% 250% 1 10 2m $ kubectl get deployment php-apache NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 7 7 3 4m $ kubectl get pods NAME READY STATUS RESTARTS AGE php-apache-3046963998-3ewo6 0/1 Pending 0 1m php-apache-3046963998-8m03k 1/1 Running 0 1m php-apache-3046963998-ddpgp 1/1 Running 0 5m php-apache-3046963998-lrik6 1/1 Running 0 1m php-apache-3046963998-nj465 0/1 Pending 0 1m php-apache-3046963998-tmwg1 1/1 Running 0 1m php-apache-3046963998-xkbw1 0/1 Pending 0 1m Listing 5: Observing CPU utilization increase to 250% while pods are scaled out to provide resources. Two minutes after the user load simulation stops, the CPU utilization drops back to zero and the number of pods drops from seven to one. $ kubectl get hpa NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE php-apache deployment/php-apache/scale 50% 0% 1 10 7m $ kubectl get deployment php-apache NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 1 1 1 9m $ kubectl get pods NAME READY STATUS RESTARTS AGE php-apache-3046963998-ddpgp 1/1 Running 0 9m Listing 6: Watching CPU utilization drop to zero percent and the pod count drop to one. As can be seen (see Listing 5), it is very easy to dynamically adjust the number of pods to the load by enabling the cluster autoscaler. Automatic and dynamic scaling can also be very helpful when there are irregularities in cluster utilization. 
For example, development-related clusters or continuous computing operations can be run on weekends or at night, and compute-intensive applications can be better scheduled so that the cluster is optimally utilized. In all cases, the cluster can be used optimally: either reduce the number of unused nodes to save energy, or scale to the limit to provide enough computing power. Depending on which case occurs, a dynamically scaling cluster ensures that at high or low utilization, all tasks are solved in the most efficient way. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Fail-safe As with each element of our cluster, we ensure that more than one instance of each component is running simultaneously. Setting the number to five physical nodes per enclosure is one reason for creating the possibility of optimal scaling and appropriate resilience. The greater the number of nodes, the less likely there is to be a total failure or a bottleneck in scaling capabilities. Load balancing and availability are closely related to cluster resilience. One way to ensure that a master node is highly available is to allow a worker node to step in as master. Kubernetes inherently provides this function: as soon as a master node fails, a worker takes its place, following the active/passive design. Moreover, this failover implementation is active for all worker nodes. Figure 30 shows how load balancing and failover work together to keep this cluster alive in the event of a master as well as worker node failure. 
The actors here are the cluster components etcd, kube-apiserver, kube-controller-manager and kube-scheduler (see Google Kubernetes). The Kubernetes API server, controller, and scheduler all run inside Kubernetes as pods. This means that in the event of a failure, each of these pods will be moved to a different node, thus preserving the core services of the cluster. Consider the potential failure scenarios: Loss of the master node: If not configured for HA, loss of the master node or its services will have a severe impact on the application. The cluster will not be able to respond to commands or deploy nodes. Each service in the master node is critical and is configured appropriately for HA, so a worker node automatically steps in as the master. Loss of worker nodes: Kubernetes is able to automatically detect and repair pods. Depending on how the services are balanced, there may be an impact on the end users of the application. If any pods on a node are not responding, kubelet detects this and informs the master to use a different pod. Network failure: The master and worker nodes in a Kubernetes cluster can become unreachable due to network failures. In some cases, they are treated as node failures. In this case, other nodes are used accordingly to replace the respective node that is unreachable. Source: Own representation. Figure 30: Failure and takeover scenario of the master node rpi1 and the worker node rpi3. Kubernetes is configured to be highly available to tolerate the failure of one or more master nodes and up to four worker nodes. This is critical for running development or production environments in Kubernetes. An odd number of nodes is chosen so that it is possible to keep the cluster alive even with only one node. As a last resort, the cluster can continue to operate with a single node acting as both master and worker. In this state, the cluster is neither highly available, fail-safe, nor completely resilient, but it survives. 
","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Cost efficiency and performance Looking at the costs, there are two sides to the story. Excluding the effort contributed to assemble, install and configure the cluster, the total acquisition cost of the investment is exactly 358.79 EUR. This construct is contrasted with a simple Linux cluster for comparison. This Linux cluster consists of commercially available PCs with core data comparable to that of the Raspberry Pi nodes. Raspberry Pi 3 Model B Linux PC CPU Cortex-A53 1.2 GHz Quadcore Intel Celeron J1900 2 GHz Quadcore RAM 1024 MB 4 GB Network 100 Mbps 1000 Mbps Power consumption max. 4 W max. 10 W Price per computer approx. 35 € approx. 95 € Total price approx. 175 € approx. 475 € Table 2: Cost comparison of the core components of Raspberry Pi and Linux PC. If we now compare the core data, we can see that the comparison system is definitely associated with higher acquisition costs. It is important to mention that this comparison primarily focuses on the costs and not the performance of the individual systems. It is obvious that a Linux PC, based on a CISC processor architecture, definitely achieves higher FLOPS than an ordinary ARM processor. Nevertheless, it becomes clear in the first approach that, despite the lower performance, there is a significant difference in terms of cost, especially when performance per watt is calculated. 
Due to the compact design of the Raspberry Pi and the accommodation of all components, such as the integrated power supply and GPU, it is a competitive partner for the Linux PC. In relation to the costs, the question arises how the comparison systems perform in terms of performance. For this, a performance test is performed with the help of sysbench. Sysbench is a benchmark application that quickly gives an impression of the system performance. For this purpose, a CPU benchmark is run on both systems, which calculates all prime numbers up to 20000, and the results are shown in Table 3 and Figure 31. Raspberry Pi 3 Model B Linux PC CPU benchmark Prime number calculation Prime number calculation Threads (process parts) 4 4 Limit 20000 20000 Calculation time 115.1536 seconds 11.2800 seconds Table 3: Comparison of the times for the prime number calculation up to 20000. Source: Own representation. Figure 31: Results of the sysbench benchmark run on node rpi2. The difference in calculation time is clearly visible: around 104 seconds. According to the visible comparisons, and as already mentioned in the cost comparison, there is no question that a Linux PC based on the CISC architecture has higher CPU performance than the Raspberry Pi with an ARM architecture. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Energy efficiency and cooling ARM processors, such as those installed on the Raspberry Pi, have a high energy efficiency with a clock frequency of 1.2 GHz and a power consumption of max. 4 watts. The average power consumption is about 2 watts in idle mode. 
The switch consumes 2.4 watts at 0.2 amps of current and 12 volts. The USB charger sits at the end of the power chain, and its total wattage output covers all connected components. Summing up all installed components, you get a total power consumption of 13.75 watts in idle and 23.75 watts at maximum load (see Table 4). Power consumption Component Idle maximum Raspberry Pi 3 Model B 2 Watt 4 Watt Edimax ES-5800G V2 Gigabit Switch (8-Port) 2.4 watt 2.4 watt LCD display module 1602 HD44780 with TWI controller 0.1 watt 0.1 watt Antec TRICOOL 92mm 4-pin case fan 1.25 watt 1.25 watt Anear 60 Watt USB Charger (6-Port) - - Total power consumption 13.75 Watt 23.75 Watt Table 4: Total power consumption of the PiCube in idle mode and at maximum CPU load of 100%. In the following test, CPU load is generated using the Sysbench prime calculation program, and the advantages and disadvantages of active cooling and passive cooling elements are shown. We read system values such as temperature, clock frequency and voltage using the following commands in each case: 3 vcgencmd measure_temp vcgencmd scaling_cur_freq vcgencmd measure_volts core Listing 7: Commands for querying temperature, clock frequency and voltage. In the following, we look at three temperature curves in the case. The CPU clock frequency is 1.2 GHz and the CPU voltage is 1.325 volts over a period of 5 minutes: Temp1: In the case, without heatsink on SoC, without active cooling. Temp2: In the case, with heatsink on SoC, without active cooling. Temp3: In the case, with heatsink on SoC, with active cooling. CPU utilization (%) Temp1 (°C) Temp2 (°C) Temp3 (°C) 0 39 39 44 100 77 77 82 Table 5: Measured values of the heat generation in the case. Next, we look at three temperature profiles of a Raspberry Pi processor. The CPU clock frequency is 1.2 GHz and the CPU voltage is 1.325 volts over a period of 10 minutes: Temp1: CPU, without heat sink on SoC, without active cooling. 
Temp2: CPU, with heat sink on SoC, without active cooling. Temp3: CPU, with heat sink on SoC, with active cooling. CPU utilization (%) Temp1 (°C) Temp2 (°C) Temp3 (°C) 0 44 32.2 27.8 100 83.3 83.3 69.8 Table 6: Measured values of the CPU heat development. The heat development of the circuit boards of each individual computer is also taken into account. Although this is low, it increases constantly with the number of nodes installed in the case. A single board under average load develops about 35 degrees Celsius. With 5 nodes, this is already around 38 degrees, which corresponds to a factor of around 1.08 per node. If all 5 nodes are overclocked by increasing the processor's clock frequency, this factor increases to 1.1. Temperature differences of 10 degrees in the case and the processor prove that the maximum performance of all hardware nodes cannot be exploited without appropriate cooling. Passive heat sinks and the active cooling already implemented with a case fan help here. There is no question that the optimized airflow inside the case also contributes to the improved cooling performance. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:4","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Use Case SETI@home After evaluating the cluster, we turn to a use case from the scientific domain. A current public-resource computing (PRC) project of BOINC is SETI@home, a scientific experiment run by the University of California at Berkeley that uses computers connected to the Internet in the search for extraterrestrial intelligence. One participates by running a free client program on a computer that downloads and analyzes radio telescope data. 
This project relies on the concept of grid computing. Data sets to be processed are divided into smaller data sets and distributed to all participating clients, which compute them and report the results back to the distributor, which reassembles the computations into an overall data set. The current computing power of the entire BOINC grid is about 20.6 PetaFLOPS, distributed over nearly 0.9 million computers. SETI@home has a share of about 19.1% of this. 4 In the following, container virtualization is exploited and pre-built BOINC client Docker images are used. These images are prefabricated containers, which are started as scalable pods on the PiCube and scale automatically in order to utilize the entire computing power of the cluster. To do this, you register with the SETI@home project and create an account. Using this account data, you generate a container application called k8s-boinc-demo and start it on the cluster with a scaling limit of 10 pod instances. In Figure 32, the Kubernetes dashboard shows how the pods are distributed evenly, or according to workload, across worker nodes rpi2 to rpi5 after launch. 5 Source: Own representation. Figure 32: Overview of the utilization, status and distribution of the pods on the nodes rpi2 to rpi5. Within the SETI@home account, we define how the individual clients or pods are utilized. The CPU utilization is left at the default value of 100%, and after about 5 minutes the CPU utilization of all cluster nodes increases to 100% and remains constant at this value (see Figure 33). The cluster now computes data for the SETI@home project. Source: Own representation. Figure 33: The CPU utilization of node rpi3 at 100%. Figure 34 shows the computers currently logged in to the grid with our account information. Each pod is identified here as a single client. If we now assume that the number of PiCube clusters increases to ten, the number of pods would multiply by the same factor. 
With 10 pod instances per cluster, this means 100 active SETI@home clients that could make their computing power available to the BOINC grid. Source: (SETI@home - Your Computers, 2018). Figure 34: Listing of all logged-in clients or pods in our cluster. This use case shows that the scientific utility of the cluster is beyond question. Dynamic pod scaling demonstrates that even nodes with low individual performance can achieve high computing performance when combined as a swarm. BOINC's computing grid is a typical example of how computers distributed around the world and connected via the Internet can solve problems together. ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:1:5","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Conclusion and outlook It is possible to construct a system comparable to a supercomputer at low cost. It is obvious that the performance of an ARM processor cannot currently keep up with commercially available CISC processors, as described in Cost efficiency and performance. However, the use case and the illustration of dynamic scaling show that once distributed systems are interconnected, they develop very high parallel computing power. The Raspberry Pi cluster provides an interesting approach for performing tests and use cases in research, education, and academia. ARM architectures will continue to evolve rapidly in performance and efficiency over the next few years, with power consumption always in focus. The requirements set at the beginning of this thesis, such as energy efficiency, resilience and modularity, have been met and sensibly implemented in the design. 
There is no question that this PiCube cluster is not a real competitor to massively parallel supercomputers; the decisive benefit lies in its use as a computing grid. There are several ways to use this system in the future. On the one hand, its simplicity means it can be used for cluster computing as a small, full-featured, low-cost, energy-efficient development platform. This includes understanding its limitations in terms of performance, usability and maintainability. On the other hand, it can act as a mobile and self-sufficient cloud-in-a-box system by adding components such as solar cells and mobile connectivity. Either way, it will remain a fascinating project that offers developers many possibilities and encourages thinking outside the box. Check out the full project Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4) Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4) For the full project, see Conception, Construction and Evaluation of a Raspberry Pi Cluster ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:2:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Bibliography Adams, J. (September 2017). SFT Guide 14/17 - Raspberry Pi Tips, Tricks \u0026 Hacks (No. 1). Third generation: The innovations of the Raspberry Pi 3, p. 159. Baier, J. (2017). Getting Started with Kubernetes - Harness the power of Kubernetes to manage Docker deployments with ease. Birmingham: Packt Publishing. Bauke, H., \u0026 Mertens, S. (2006). Cluster computing - Practical introduction to high performance computing on Linux clusters. Heidelberg: Springer. Bedner, M. (2012). 
Cloud computing - technology, security and legal design. Kassel: kassel university press. Bengel, G., Baun, C., Kunze, M., \u0026 Stucky, K.-U. (2008). Master course parallel and distributed systems - fundamentals and programming of multicore processors, multiprocessors, clusters and grids. Wiesbaden: Vieweg+Teubner. Beowulf Cluster Computing. (January 12, 2018). Retrieved from MichiganTech - Beowulf Cluster Computing: http://www.cs.mtu.edu/beowulf/ Beowulf Scalable Mass Storage (T-Racks). (January 12, 2018). Retrieved from ESS Project: https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW80.html BOINC - Active Projects Statistics. (January 31, 2018). Retrieved from Free-DC - Distributed Computing Stats System: http://stats.free-dc.org/stats.php?page=userbycpid\u0026cpid=cfbdd0ffc5596f8c5fed01bbe619679d Cache. (January 14, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0309291.htm Christl, D., Riedel, M., \u0026 Zelend, M. (2007). Communication systems / computer networks - Research of tools for the control of a massively parallel cluster computer in the computer center of the West Saxon University of Applied Sciences Zwickau. Zwickau: Westsächsische Hochschule Zwickau. CISC and RISC. (January 28, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0412281.htm Containers vs. virtual machines. (December 13, 2017). Retrieved from NetApp Blog: https://blog.netapp.com/blogs/containers-vs-vms/ Coulouris, G., Dollimore, J., Kindberg, T., \u0026 Blair, G. (2012). Distributed systems - concepts and design. Boston: Pearson. Dennis, A. K. (2013). Raspberry Pi super cluster. Birmingham: Packt Publishing. The science behind SETI@home. (January 30, 2018). Retrieved from SETI@home: https://setiathome.berkeley.edu/sah_about.php Docker on the Raspberry Pi with HypriotOS. (January 24, 2018). 
Retrieved from Raspberry Pi Geek: http://www.raspberry-pi-geek.de/Magazin/2017/12/Docker-auf-dem-Raspberry-Pi-mit-HypriotOS Eder, M. (2016). Hypervisor- vs. container-based virtualization. Munich: Technical University of Munich. Einstein@Home on Android devices. (January 23, 2018). Retrieved from GEO600: http://www.geo600.org/1282133/Einstein_Home_on_Android_devices Enable I2C Interface on the Raspberry Pi. (January 28, 2018). Retrieved from Raspberry Pi Spy: https://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/ Encyclopedia - VAX. (January 20, 2018). Retrieved from PCmag: https://www.pcmag.com/encyclopedia/term/53678/vax Failover Cluster. (January 20, 2018). Retrieved from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/ff650328.aspx Fenner, P. (December 10, 2017). So What's a Practical Laser-Cut Clip Size? Retrieved from DefProc Engineering: https://www.deferredprocrastination.co.uk/blog/2013/so-whats-a-practical-laser-cut-clip-size/ Fey, D. (2010). Grid computing - An enabling technology for computational science. Heidelberg: Springer. GitHub - flash. (January 24, 2018). Retrieved from hypriot / flash: https://github.com/hypriot/flash Goasguen, S. (2015). Docker Cookbook - Solutions and Examples for Building Distributed Applications. Sebastopol: O'Reilly. Grabsch, V., \u0026 Radunz, Y. (2008). Seminar presentation - Amdahl's and Gustafson's law. o.O.: Creative Commons. Herminghaus, V., \u0026 Scriba, A. (2006). Veritas Storage Foundation - High End Computing for UNIX Design and Implementation of High Availability Solutions with VxVM and VCS. Heidelberg: Springer. 
Hori","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:3:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Appendix A - Script: Installing Kubernetes ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:4:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Appendix B - Script: LCD display ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:5:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"Appendix C - Script: Docker Autoscaling Dockerfile: FROM php:5-apache ADD index.php /var/www/html/index.php RUN chmod a+rx index.php index.php: \u003c?php $x = 0.0001; for ($i = 0; $i \u003c= 1000000; $i++) { $x += sqrt($x); } echo \"OK!\"; ?\u003e Cf. (Horizontal Pod Autoscaling, 2018). ↩︎ Cf. (Baier, 2017, p. 117). ↩︎ Cf. (Raspberry Pi 3: Power consumption and CoreMark comparison, 2018). ↩︎ Cf. (The Science Behind SETI@home, 2018); Cf. (BOINC - Active Projects Statistics, 2018) ↩︎ Cf. (Project to setup Boinc client in Docker for the RaspberryPi, 2018). 
↩︎ ","date":"Friday, Jul 23, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/:6:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-4/"},{"categories":["Projects"],"content":"The goal of this work is to create an affordable, energy efficient and portable mini-supercomputer. Ideally, a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled. ","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:0:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Construction The practical part of the work, the construction, provides a step-by-step explanation of how to assemble, install and configure the Raspberry Pi cluster. The installation instructions and scripts used in the following are available either from the sources mentioned or from the GitHub repository http://github.com/segraef/PiCube. All installation and configuration steps are easy to follow, so the cluster can readily be rebuilt on your own. 
","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:1:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Mounting and wiring Elastic clips (see Simplicity) allow the individual case sides to be easily plugged together without the use of screws. The pre-milled holes for the switch, USB charger and LCD display allow these components to be attached using the plastic nuts and screws. Pre-milled holes for the LCD display, network ports, HDMI port, and power connector provide the proper slots for the cables to be subsequently connected outside of the case. The assembly is done in the following steps: Connecting the single-board computers together using hex spacers. Screwing the stacked Raspberry Pis to the left inner side. Attaching the switch to the base plate. Attaching the LCD display and USB sockets to the front panel. Screwing the USB charger to the right side plate. Mounting the case fan, HDMI and network jacks. After the entire interior has been assembled, the side panels are clipped in step by step. We start with the base plate, onto which the left and right side plates are clipped. Before assembling the front panel, we wire the individual components. The LCD display requires both power and control lines. A total of four jumper cables are used for this purpose: two for power and two for the control and transmission signals, which we connect to GPIO pins 2, 3, 5 and 6 of the Raspberry Pi that acts as the master node. All mini computers are connected to the switch using RJ45 network cables. 
Likewise, we connect the cables of the network jacks, which serve as network port extensions, to the switch. Micro-USB cables are typically used for data transfer; in this scenario, they are used only to power the individual single-board computers. We connect the network switch and all Raspberry Pis to the USB charger using the USB cables. We connect the case fan with its 3-pin connector to GPIO ports 2 and 6 of one of the Raspberry Pis to supply it with power. After all the necessary cables and components are connected, we move on to the cluster installation. The following two images show the PiCube in a partially and fully wired state. Source: Own representation. Figure 24: Partially wired PiCube without front and bottom panels. Source: Own representation. Figure 25: Fully wired and closed PiCube. ","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:1:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Installation Operating system To provide a suitable development system and to meet all requirements, we use the HypriotOS operating system as the base system for the cluster. The OS provides a Docker-optimized Linux kernel and is therefore ideally suited for this cluster. It already includes all the required modules such as Docker Engine, Docker Machine and Docker Compose. HypriotOS is an operating system specially developed for the Raspberry Pi, based on the Linux distribution Debian. Important prerequisites for the successful operation of a cluster system are identical and redundantly designed hardware (see Cluster) and the cluster software that runs on it. 
Among other things, this concerns the same software and driver versions for all participating nodes. To ensure that the cluster system maintains a homogeneous operating system and version structure on all nodes, we use operating system images. [^56] Imaging and provisioning For the installation of Raspberry Pi operating systems, memory images are used. Images are disk images in a compressed file that contains files, file system structures, and boot sectors. Simply put, an image contains an exact disk copy of an operating system. The advantage of using images is that an operating system does not have to be installed. To deploy this exact disk copy to our nodes, we copy the contents to SD cards and insert them into our cluster nodes. Using the Hypriot flash tool, the hostname is passed to the image and is thus hardcoded for the first boot of the Raspberry Pi. Hardcoded means that values, such as the hostname in this case, are written into the startup configuration of the operating system and are read and used when the system starts. The following code snippet (see Listing 1) shows the command that sets the hostname rpi1 for the image package hypriotos-rpi-v1.6.0.img.zip after downloading the HypriotOS image from the address https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/. [^57] flash --hostname rpi1 https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/hypriotos-rpi-v1.6.0.img.zip Listing 1: The HypriotOS image is downloaded and copied to the SD card using flash. YAML Ain't Markup Language (YAML) is a human-readable data serialization language and gives us the possibility to define custom parameters for a startup configuration. Thus, it is very easy to pass values into a configuration which will be initialized and applied when each node is started for the first time. 
Examples of startup parameters that can be preconfigured: Hostname WLAN SSID WLAN password Static or dynamic IP address The YAML files for our nodes look like this, with the hostname adjusted accordingly for the respective Raspberry Pi: hostname: \"rpi\" wifi: interfaces: wlan0: ssid: \"WLAN\" password: \"2345251344834395\" Listing 2: Example of a YAML configuration. Fixed WLAN parameters let individual nodes connect to an existing wireless network. We copy this configuration to the image in the /boot/ directory. The following code snippet shows setting the YAML configuration using flash. On initial startup, /boot/device-init.yaml is accessed and the appropriate parameters are copied to the operating system's startup configuration: flash --config device-init.yaml https://github.com/hypriot/image-builder-rpi/releases/download/v1.6.0/hypriotos-rpi-v1.6.0.img.zip Listing 3: The HypriotOS image is downloaded and copied to the SD card using flash together with a given YAML configuration. To save time, we prepare the respective image for each individual Raspberry Pi in advance and thus take advantage of automatic provisioning. With a total of 5 cluster nodes, the manual installation, configuration and maintenance of each individual node would be very time-consuming. Therefore, we provision every single operating system in advance, which means we use a master image that already contains current driver versions and software packages. 
We modify this system image for each individua","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:1:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Cluster configuration Installing and configuring Kubernetes All five nodes are now configured in the same way as described below; the order is not decisive here. The connection to the individual Raspberry Pis is established via Secure Shell (SSH). To establish this secure connection over the network, the terminal program PuTTY is used. As soon as the connection is established, a \"sudo apt-get update\" is performed to update the package installation lists to the latest version. Afterwards, \"sudo apt-get upgrade\" is used to update all the correspondingly required software packages for Docker. The next step is to install Kubernetes using the following command: curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - \u0026\u0026 echo \"deb http://apt.kubernetes.io/ kubernetes-xenial main\" | sudo tee /etc/apt/sources.list.d/kubernetes.list \u0026\u0026 sudo apt-get update -q \u0026\u0026 sudo apt-get install -qy kubeadm Listing 4: Command to install Kubernetes. Here, the corresponding Kubernetes installation package is downloaded and installed. Now the node rpi1 is selected and the command sudo kubeadm init is executed to initialize the cluster. Thus, the cluster is created and rpi1 is set as the master node. All other hosts are added to the cluster as worker nodes using the kubeadm join command. After the last node is successfully added, the command kubectl get nodes is used to check whether all nodes are ready and added to the computer cluster (see Figure 27). 
Source: Own representation. Figure 27: Status query in the terminal of all nodes. The Kubernetes cluster now has an active master node. If this node ever fails, the kube-controller-manager (see Google Kubernetes) detects this and decides which of the worker nodes will step in as the new active master. Source: Own representation. Figure 28: Schematic structure of Kubernetes on the PiCube cluster. Configuration LCD display In order for the LCD display to show values such as IP addresses, system time, temperature and status information, a corresponding Python script is used. This script is executed automatically at every startup and delivers status values to the LCD display via I2C. [^58] Source: Own representation. Figure 29a: Display of temperature, voltage and load on the LCD display. Source: Own representation. Figure 29b: Display of temperature, voltage and load on the LCD display. ","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:2:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Evaluation Note Check out the next part Conception, Construction and Evaluation of a Raspberry Pi Cluster (4/4) ","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:3:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"Bibliography Adams, J. (September 2017). SFT Guide 14/17 - Raspberry Pi Tips, Tricks \u0026 Hacks (No. 1). 
Third generation: The innovations of the Raspberry Pi 3, p. 159. Baier, J. (2017). Getting Started with Kubernetes - Harness the power of Kubernetes to manage Docker deployments with ease. Birmingham: Packt Publishing. Bauke, H., \u0026 Mertens, S. (2006). Cluster computing - Practical introduction to high performance computing on Linux clusters. Heidelberg: Springer. Bedner, M. (2012). Cloud computing - technology, security and legal design. Kassel: kassel university press. Bengel, G., Baun, C., Kunze, M., \u0026 Stucky, K.-U. (2008). Master course parallel and distributed systems - fundamentals and programming of multicore processors, multiprocessors, clusters and grids. Wiesbaden: Vieweg+Teubner. Beowulf Cluster Computing. (January 12, 2018). Retrieved from MichiganTech - Beowulf Cluster Computing: http://www.cs.mtu.edu/beowulf/ Beowulf Scalable Mass Storage (T-Racks). (January 12, 2018). Retrieved from ESS Project: https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW80.html BOINC - Active Projects Statistics. (January 31, 2018). Retrieved from Free-DC - Distributed Computing Stats System: http://stats.free-dc.org/stats.php?page=userbycpid\u0026cpid=cfbdd0ffc5596f8c5fed01bbe619679d Cache. (January 14, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0309291.htm Christl, D., Riedel, M., \u0026 Zelend, M. (2007). Communication systems / computer networks - Research of tools for the control of a massively parallel cluster computer in the computer center of the West Saxon University of Applied Sciences Zwickau. Zwickau: Westsächsische Hochschule Zwickau. CISC and RISC. (January 28, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0412281.htm Containers vs. virtual machines. (December 13, 2017). Retrieved from NetApp Blog: https://blog.netapp.com/blogs/containers-vs-vms/ Coulouris, G., Dollimore, J., Kindberg, T., \u0026 Blair, G. (2012). 
Distributed systems - concepts and design. Boston: Pearson. Dennis, A. K. (2013). Raspberry Pi super cluster. Birmingham: Packt Publishing. The science behind SETI@home. (January 30, 2018). Retrieved from SETI@home: https://setiathome.berkeley.edu/sah_about.php Docker on the Raspberry Pi with HypriotOS. (January 24, 2018). Retrieved from Raspberry Pi Geek: http://www.raspberry-pi-geek.de/Magazin/2017/12/Docker-auf-dem-Raspberry-Pi-mit-HypriotOS Eder, M. (2016). Hypervisor- vs. container-based virtualization. Munich: Technical University of Munich. Einstein@Home on Android devices. (January 23, 2018). Retrieved from GEO600: http://www.geo600.org/1282133/Einstein_Home_on_Android_devices Enable I2C Interface on the Raspberry Pi. (January 28, 2018). Retrieved from Raspberry Pi Spy: https://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/ Encyclopedia - VAX. (January 20, 2018). Retrieved from PCmag: https://www.pcmag.com/encyclopedia/term/53678/vax Failover Cluster. (January 20, 2018). Retrieved from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/ff650328.aspx Fenner, P. (December 10, 2017). So What's a Practical Laser-Cut Clip Size? Retrieved from DefProc Engineering: https://www.deferredprocrastination.co.uk/blog/2013/so-whats-a-practical-laser-cut-clip-size/ Fey, D. (2010). Grid computing - An enabling technology for computational science. Heidelberg: Springer. GitHub - flash. (January 24, 2018). Retrieved from hypriot / flash: https://github.com/hypriot/flash Goasguen, S. (2015). Docker Cookbook - Solutions and Examples for Building Distributed Applications. Sebastopol: O'Reilly. Grabsch, V., \u0026 Radunz, Y. (2008). Seminar presentation - Amdahl's and Gustafson's law. o.O.: Creative Commons. Herminghaus, V., \u0026 Scriba, A. (2006). Veritas Storage Foundation - High End Computing for UNIX Design and Implementation of High Availability Solutions with VxVM and VCS. Heidelberg: Springer. 
Hori","date":"Wednesday, Jun 2, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/:3:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-3/"},{"categories":["Projects"],"content":"The goal of this work is to create an affordable, energy efficient and portable mini-supercomputer. Ideally, a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled. ","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:0:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"Conception and design In this chapter, we go into the basic concept of creating a cluster using Raspberry Pi single-board computers. These mini-computers form the basis for constructing a cost-effective and energy-efficient cluster system. Inspired by Joshua Kiepert and his 32-node RPiCluster as well as Nick Smith and his first design of a 5-node Raspberry Pi cluster, the idea emerged of further developing and improving certain components, such as increasing cooling performance by means of an optimized airflow supply, adding and logically arranging further connection options, and providing for modular expandability of the cluster. 
[^52] ","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:1:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"Requirements The main focus in the design of this Raspberry Pi cluster is on the following requirement criteria: Cost efficiency. Energy efficiency. Scalability. Resilience. Further criteria are a visual and easy-to-read status and information display of current system values, ideal cooling and optimization of the airflow for the best possible removal of heat. Furthermore, the entire design concept is fundamentally based on certain design requirements, which we will discuss in more detail in the design decisions (see Design decisions). Cost and energy efficiency The factors of cost and energy efficiency are paramount and predominantly influence the conception and design. In order to keep the acquisition costs as low as possible but still be able to offer efficient computing power, Raspberry Pi single-board computers in version 3 are to be installed. The use of unnecessary cable lengths or heavy and expensive materials such as sheet steel or aluminum for the housing should be avoided. In addition, further weight and cost savings are to be achieved by using plastic instead of steel screws. The system should be reproducible at low cost and consume as little power as possible. A power consumption of less than 60 watts is planned, which is roughly equivalent to the average power consumption of a commercially available light source such as an incandescent lamp. Such low power consumption also implies portability, which means that it should be possible to use this cluster on a mobile basis. 
Scaling and resilience Scaling is to be considered in this system in two respects. On the one hand, the user should be given the option of connecting the entire system with other clusters in order to be able to scale cluster-wise at this level. Here we also speak of horizontal scaling. On the other hand, it should be possible to scale vertically, or node by node, within a cluster by adding or removing individual nodes. This is done either automatically using cluster software or by physically adding or removing further single-board computers. Due to the modular structure of the cluster, the primary goal is to expand an individual cluster node by node, i.e. vertical scaling. In parallel to scaling, failover is an important factor when it comes to keeping the cluster alive. As soon as the cluster system scales on the software side, all peripheral components must be designed and optimized accordingly so as not to reach their physical limits, such as maximum storage capacity or computing power. This also applies to components such as the network distributor and the power supply unit. With the help of organizational measures and the creation of technical redundancies, this fault tolerance is to be guaranteed. This is also referred to as system availability. Status and information display For a direct perception of current system values such as host name, system time, processor and case temperature, a visual information display should be available in the form of a display. These important and system-critical values should be immediately and directly readable without additional equipment such as a keyboard or an external monitor. Furthermore, this display should have a backlight so that it remains readable even in dark rooms or with little to no light. Cooling Passive cooling and optimized case ventilation are important points that have to be considered in the design. The advantage of passive cooling should be used to save energy on the one hand and costs on the other. 
Active cooling is nevertheless necessary to ensure the supply of cold air and the removal of warm air. Likewise, a basic law of physics, the so-called chimney effect, must not be lost sight of during further planning. This states: warm air rises, cold air remains on the ground. Air flow and heat dissipation The processors of the individual nodes and the integrated circuits (ICs) of the individual peripheral components develop a certa","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:1:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"Design decisions The basic characteristics of this Raspberry Pi cluster are minimalism, transparency and simplicity. Minimalism in architecture, be it in building or model construction, is characterized by the reduction to simple cubic forms. The goal is the formation of geometric and pure forms, which is made possible with the help of transparent building materials such as glass. This brings us to the property of transparency. On the software side, transparency means that the user of this system knows exactly how the cluster system works, how it scales or what software is used. On the hardware side, it is clear which physical components such as cables, network distributors or connectors are connected to each other or whether systems emit visual signals. Simplicity, on the other hand, stands for easy understanding of the system. It should, at first glance, imply the usefulness of this system, from the point of view of a novice as well as a technically versed user. 
Minimalism and transparency: cube We decide on a compact and portable design in the form of a cube and christen the system with the name PiCube. The Pi, as in Raspberry Pi, stands for Python Interpreter and signals that this system supports the Python programming language or is generally supported by all common operating systems like Raspbian or CoreOS. Python stands out with its minimalistic and easily understandable programming style and also contributes to the overall concept of this system. The Cube part refers to the cube shape of the case. Another idea for the naming is PiKube, where Kube stands for the cluster management system used and at the same time comes from Kubus, the ancient Greek kybos or the Latin cubus for cube. However, since the design of the cube case does not have exactly equal faces, we decide to use PiCube, since a cube is a regular hexahedron. A hexahedron has six faces of equal size. Simplicity: Elastic Clip Concept In order to allow easy assembly and reassembly, without the use of additional tools or fastening screws of any kind, the Elastic Clip concept by Patrick Fenner was used. This concept allows us to create a slim and elegant design. For this purpose, we use acrylic glass, which is transparent, light and flexible, as the construction material for the housing. The clips make it possible to connect the individual acrylic glass sides of the cube at a 90 degree angle. The real highlight here is the automatic snap-in of the clip connections in the insertion openings provided for this purpose. Removal or dismantling of the cube walls is done by bending the clip, so that the connections can be released. [^53] Source: Own representation based on (Laser-Cut Elastic-Clips, 2017) Figure 19: Illustration of clip dimensions with and without force application. $F$ stands for the force applied to the clip. 
The clip dimensions are: $a$ = 4mm, $b$ = 2mm, $d$ = 2mm, $l$ = 25mm. Depending on the nature, flexibility and material of the acrylic glass used, the clip may break if too much force is applied. The problem is that the maximum force $F$ is applied at the upper right end of the clip; if the length $l$ is too short, the clip breaks when bent too far or subjected to too much load. To counteract this, there is a simple way to distribute the load on the clip at maximum force by widening the cut. This reduces the risk of the clip breaking. [^54] Source: Own representation based on (Laser-Cut Elastic-Clips, 2017) Figure 20: Widening the incision site increases durability. Due to the material nature and flexibility of acrylic glass, the use of this elastic clip concept is most suitable. If other materials such as medium-density fiberboard (MDF) or simple wood are considered, it must be noted that, due to the unidirectional wood fibers, they offer much weaker resistance and thus less flexibility for bending the clip elastically. Use in conjunction with MDF is an alternative, but the clip will inevitably break under too much load. ","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:1:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"PiCube 2D model and logo Inkscape is professional software for editing two-dimensional (2D) vector graphics. With the help of this software we create a 2D graphic of the housing plates. Prior to manufacturing the enclosure, we sketch and produce a two-dimensional template to determine the exact dimensions for connections, fasteners, and required air slots. 
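The observation above that a clip with too short a length $l$ breaks can be made plausible with standard cantilever-beam theory (our own sketch, not a calculation from the original text): for a fixed tip deflection $\delta$, the maximum bending stress at the root of a rectangular beam is $\sigma = 3 E d \delta / (2 l^2)$, so halving the clip length quadruples the stress. The Young's modulus and the deflection below are assumed illustrative values.

```python
# Plausibility sketch using standard cantilever-beam bending theory.
# Assumed values (not from the text): E for acrylic, tip deflection delta.
E = 3.2e9      # Pa, assumed Young's modulus of acrylic (PMMA)
d = 2e-3       # m, clip thickness d = 2 mm from the text
delta = 1e-3   # m, assumed tip deflection when snapping the clip in

def max_bending_stress(l: float) -> float:
    """Maximum stress (Pa) at the clip root for a fixed tip deflection."""
    # sigma = 3 * E * d * delta / (2 * l^2) for a rectangular cantilever
    return 3 * E * d * delta / (2 * l ** 2)

full, half = 25e-3, 12.5e-3  # m: the 25 mm clip from the text vs. half length
print(max_bending_stress(half) / max_bending_stress(full))  # → 4.0
```

Under these assumptions, halving the clip length quadruples the stress at the root, which matches the advice to keep the clips long and to widen the cut.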
Likewise, we are able to determine the exact cutting dimensions and positioning for the elastic clips (see Design decisions) to the millimeter. [^55] In the following figure, all six sides are shown and marked accordingly. L for left side, R for right side, F for front, T for top, G for bottom, and B for back. The back contains the connectors for HDMI, network and power, as well as the ventilation grilles and mounting holes for the case fan. The front has inlets for the double USB socket and the LCD display. Ventilation grilles are also broadly sketched here. All other sides, except the top, also have mounting holes for the rest of the components like the switch, USB charger and the additional side air intakes. Source: Own representation. Figure 21: 2D model design of the housing including all connections. 3D model and connections To ensure that all the required connections (see Connections) can be correctly attached to the housing, we use the Rhinoceros 3D program to create three-dimensional graphics. A 3D model helps to better visualize and represent the housing to be constructed. Physical components can be inserted, rotated or scaled in size. Due to the exact specification of the dimensions of the individual components, a correspondingly realistic model is created before it is manufactured. The rendered graphics (see Figures 22 and 23) show the overall design with all the individual components installed, marked in different colors for better representation. Blue marks the USB components and light gray the case fan. The color dark gray represents network components and red the HDMI port. Source: Own representation. Figure 22: 3D model design including components with a view of the front of the housing. Source: Own representation. 
Figure 23: *3D model design including components with view of the back of the housing.* Components and costs For the construction of this prototype, the individual components are obtained from various suppliers or mail-order companies such as Amazon and eBay. The acrylic glass plates are milled in collaboration with Andreas Gregori from MAD Modelle Architektur Design. For the production of the case we do not use laser cutting, but normal milling. This has the advantage that we avoid smoke marks or scorching, which occur during laser cutting due to high temperatures. Milling has another advantage: the possibility of surface milling. Thus, we are able to engrave the PiCube logo (see Figures 21 and 24) into the upper and front cube surface. The five Raspberry Pis and the matching 8 GB SD cards make up the largest part of the costs, amounting to 246.75 EUR. The mid-range costs, between 13.75 and 22.99 EUR, are accounted for by the Gigabit switch, the USB charger and the USB cables. The rest of the hardware, such as the LCD display, cables and plastic screws, is in the price range of 0.65 to 9.80 EUR. Unit(s) Component Unit price 5 Raspberry Pi 3 Model B 42.70 Euro 5 SanDisk Ultra 8 GB microSDHC UHS-I Class 10 6.65 Euro 1 Edimax ES-5800G V2 Gigabit Switch (8-Port) 22.99 Euro 1 Anear 60 Watt USB Charger (6-Port) 18.99 Euro 5 Micro USB cable (15 cm) 2.75 Euro 5 Transparent power cord (15 cm) 0.79 Euro 2 RJ45 jack (female-male) 2.74 Euro 1 Dual USB 2.0-A female-male connector 4.53 Euro 1 LCD display module 1602 HD44780 with TWI controller 4.45 Euro 1 AC 250V 2.5A IEC320 C7 socket 1.39 Euro 1 C7 power cable 90 degree angled (1 meter) 3.26 Euro 1 Cable jumpers (female-female, 40 wires, 
20 cm) 0.65 Euro 56 M3 Nylon Hex Spacer Nuts and Bolts White 0.05 Euro 1 Antec TRICOOL 92mm 4-pin case fan 9.80 Euro 1 Milling cut of the a","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:1:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"Construction Note Check out the next part Conception, Construction and Evaluation of a Raspberry Pi Cluster (3/4) ","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:2:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"Bibliography Adams, J. (September 2017). SFT Guide 14/17 - Raspberry Pi Tips, Tricks \u0026 Hacks (No. 1). Third generation: The innovations of the Raspberry Pi 3, p. 159. Baier, J. (2017). Getting Started with Kubernetes - Harness the power of Kubernetes to manage Docker deployments with ease. Birmingham: Packt Publishing. Bauke, H., \u0026 Mertens, S. (2006). Cluster computing - Practical introduction to high performance computing on Linux clusters. Heidelberg: Springer. Bedner, M. (2012). Cloud computing - technology, security and legal design. Kassel: kassel university press. Bengel, G., Baun, C., Kunze, M., \u0026 Stucky, K.-U. (2008). Master course parallel and distributed systems - fundamentals and programming of multicore processors, multiprocessors, clusters and grids. Wiesbaden: Vieweg+Teubner. Beowulf Cluster Computing. (January 12, 2018). 
Retrieved from MichiganTech - Beowulf Cluster Computing: http://www.cs.mtu.edu/beowulf/ Beowulf Scalable Mass Storage (T-Racks). (January 12, 2018). Retrieved from ESS Project: https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW80.html BOINC - Active Projects Statistics. (January 31, 2018). Retrieved from Free-DC - Distributed Computing Stats System: http://stats.free-dc.org/stats.php?page=userbycpid\u0026cpid=cfbdd0ffc5596f8c5fed01bbe619679d Cache. (January 14, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0309291.htm Christl, D., Riedel, M., \u0026 Zelend, M. (2007). Communication systems / computer networks - Research of tools for the control of a massively parallel cluster computer in the computer center of the West Saxon University of Applied Sciences Zwickau. Zwickau: Westsächsichen Hochschule Zwickau. CISC and RISC. (January 28, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0412281.htm Containers vs. virtual machines. (December 13, 2017). Retrieved from NetApp Blog: https://blog.netapp.com/blogs/containers-vs-vms/ Coulouris, G., Dollimore, J., Kindberg, T., \u0026 Blair, G. (2012). Distributed systems - concepts and design. Boston: Pearson. Dennis, A. K. (2013). Raspberry Pi super cluster. Birmingham: Packt Publishing. The science behind SETI@home. (January 30, 2018). Retrieved from SETI@home: https://setiathome.berkeley.edu/sah_about.php Docker on the Raspberry Pi with HypriotOS. (January 24, 2018). Retrieved from Raspberry Pi Geek: http://www.raspberry-pi-geek.de/Magazin/2017/12/Docker-auf-dem-Raspberry-Pi-mit-HypriotOS Eder, M. (2016). Hypervisor- vs. container-based virtualization. Munich: Technical University of Munich. Einstein@Home on Android devices. (January 23, 2018). Retrieved from GEO600: http://www.geo600.org/1282133/Einstein_Home_on_Android_devices Enable I2C Interface on the Raspberry Pi. (January 28, 2018). 
Retrieved from Raspberry Pi Spy: https://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/ Encyclopedia - VAX. (January 20, 2018). Retrieved from PCmag: https://www.pcmag.com/encyclopedia/term/53678/vax Failover Cluster. (January 20, 2018). Retrieved from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/ff650328.aspx Fenner, P. (December 10, 2017). So What's a Practical Laser-Cut Clip Size? Retrieved from DefProc Engineering: https://www.deferredprocrastination.co.uk/blog/2013/so-whats-a-practical-laser-cut-clip-size/ Fey, D. (2010). Grid computing - An enabling technology for computational science. Heidelberg: Springer. GitHub - flash. (January 24, 2018). Retrieved from hypriot / flash: https://github.com/hypriot/flash Goasguen, S. (2015). Docker Cookbook - Solutions and Examples for Building Distributed Applications. Sebastopol: O'Reilly. Grabsch, V., \u0026 Radunz, Y. (2008). Seminar presentation - Amdahl's and Gustafson's law. o.O.: Creative Commons. Herminghaus, V., \u0026 Scriba, A. (2006). Veritas Storage Foundation - High End Computing for UNIX Design and Implementation of High Availability Solutions with VxVM and VCS. Heidelberg: Springer. Hori","date":"Tuesday, May 11, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/:3:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-2/"},{"categories":["Projects"],"content":"The goal of this work is to create an affordable, energy efficient and portable mini-supercomputer. Ideally, a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled. 
","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:0:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Introduction Raspberry Pis are revolutionizing the computer industry. Originally developed to provide low-cost computers to schools, they are expanding far beyond their intended use. This inexpensive technology can be used to accomplish previously unexplored tasks. One such technology is a cluster computer that can run parallel jobs. Many systems built for parallel computing tasks are either expensive or unavailable outside of academia. Supercomputers are expensive to purchase as well as to use, power, and maintain. Although average desktop computers have come down in price, the cost can still become quite high if they require a larger amount of computing power. In today's world of information technology (IT), it is not new to be confronted daily with new technologies such as cloud computing, cluster computing or high-performance computing. All these terms are approaches that are intended to simplify people's work and lives. Cloud providers such as Microsoft or Amazon make these types of technologies available to customers in the form of services for a fee. Users thus have the opportunity to use the computing power of these technologies without having to buy them at a high price. The advantage here is the almost unlimited scaling of resources. Distributed computing, an infrastructure technology and basis for the provision of cluster computers via the Internet, is used to meet increasing resource demands. 
By interconnecting and adding remote systems, computing power and performance can be dynamically scaled and provided in theoretically unlimited quantities. ","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:1:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Motivation and objective of the work Unlimited computing power implies the solution approach to be able to solve seemingly unsolvable and complex problems, such as the simulation of black holes or the calculation of the Milky Way. Expensive supercomputers are oversized when it comes to testing new applications for high-performance computers or solving complex problems. In this work, we address the question of whether we can create an independent and comparable but also less expensive system with few resources, which allows us to solve complex problems in the same way. 24 years ago, Donald Becker and Thomas Sterling started the Beowulf project to create a low-cost alternative but also a powerful alternative to supercomputers. Based on this example, the idea has arisen to pursue the same principle on a smaller scale. The goal of this work is to create an affordable, energy efficient and portable mini-supercomputer. Ideally, a cluster computer with little or no carbon footprint, individual elements that are inexpensive to replace, and a portable system that can be easily disassembled and reassembled. A Raspberry Pi is ideal for this purpose because of its low price, low power consumption, and small size. At the same time, it still offers decent performance, especially when you consider the computing power offered per watt. 
","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:2:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Procedure This paper is divided into 5 parts. The first part is dedicated to the terminological clarification of background information on all technologies involved and how they are related to the cluster. Based on this, the second part clarifies the conceptualization, design, and requirements placed on the system. The main part deals with the construction of the Raspberry Pi cluster and thus forms the practical component, in which design decisions are illustrated and the construction is explained. Following this, important factors such as scalability, performance, cost and energy efficiency are discussed and evaluated. Different use cases are addressed and the technical possibilities are considered. Finally, evaluated evaluations are summarized, limitations are pointed out and future extensions are presented. ","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:3:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Background This chapter covers the basic technologies that form the basis of parallel and distributed systems. These technologies build on each other, as in a layer system, and are dependent on each other. 
Architectures and techniques based on this, such as virtualization or cluster computing, in turn provide the basic framework for container virtualization and its management systems. ","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Parallel and distributed systems Before computers evolved into distributed computing systems, there were fundamental changes in various computing technologies over the past decades, which we will briefly discuss in order to make the basic framework of parallel and distributed systems more understandable. Amdahl's and Gustafson's law At the beginning of this work, the theoretically unlimited increase of computing power was mentioned. Amdahl's law (named in 1967 after Gene Amdahl) deals exactly with this question, namely whether an unlimited increase in speed can be achieved with an increasing number of processors. It describes how the parallelization of software affects its speedup. The software is divided into non-parallelizable, i.e. sequentially executable, and parallelizable portions. Sequential parts include process initialization, communication and memory management. These parts are necessary for the synchronization of the parallel parts. They form dependencies among themselves and therefore cannot be executed in parallel. Parallel executable parts are the computations that are distributed across the processors. It is very important to estimate how much performance gain is achieved by adding a certain number of processing units in a parallel working system. 
Sometimes it happens that adding a larger number of computing units does not necessarily improve the performance, because the expected performance tends to decrease or oversaturate if we blindly add more computing resources. Therefore, it is very important to have a rough estimate of the optimized number of resources to use. Suppose a hypothetical system has only one processor with a normalized runtime of $1$. Now we consider how much time the program needs in the parallelizable portion and denote this portion by $P$. The runtime of the sequential part is thus $(1 - P)$. The runtime of the sequential part does not change, but the parallelizable part is optimally distributed to all processors and therefore runs $N$ times as fast. This results in the following runtime formula: [^1] $$\\underbrace{(1 - P)}_{\\text{sequential}} + \\underbrace{\\frac{P}{N}}_{\\text{parallel}}$$ This is where the time gain comes from: $$\\text{Time gain according to Amdahl} = \\frac{\\text{original duration}}{\\text{new duration}} = \\frac{1}{(1 - P) + \\frac{P}{N}}$$ Here $N$ is the number of processors and $P$ the runtime portion of the parallelizable program. Gene Amdahl called this time or speed gain the speedup. With massive parallelism, the runtime of the parallelizable portion can be reduced arbitrarily; the sequential portion, however, remains unaffected. As the number of processors increases, the communication overhead between the processors also increases, so that above a certain number, they tend to be busy communicating rather than processing the problem. This reduces performance and is referred to as task-distribution overhead. A program can never be completely parallelized such that all processors are busy with work at the same time. No matter how many processor units are used, if the proportion of the program that cannot be parallelized is, for example, one percent, the speedup can be at most 100. 
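The speedup formula above can be sketched numerically (a minimal illustration; the function name is ours): with a one-percent sequential share, adding ever more processors saturates the speedup just below 100, as stated.

```python
# Amdahl's law: speedup = 1 / ((1 - P) + P / N)
def amdahl_speedup(p: float, n: int) -> float:
    """Time gain for parallelizable fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.99 (a 1 % sequential share) the speedup saturates near 100,
# no matter how many processors are added:
for n in (10, 100, 1000, 1_000_000):
    print(n, round(amdahl_speedup(0.99, n), 1))
```

Doubling the processor count early on nearly doubles the speedup, but beyond a few hundred processors the sequential share dominates and further nodes add almost nothing.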
Gene Amdahl thus concluded that it makes no sense to keep increasing the computing units in order to generate unlimited computing power. [^2] Amdahl's model remains valid as long as the total number of computing operations stays the same while the number of computing units continues to increase. However, if in our hypothetical simulation the job size increases while the number of computational units continues to increase, then Gustafson's law must be invoked. John Gustafson established this law in 1988, which states that as long as the problem being addressed is large enough, it can be efficiently parallelized. Unlike Amdahl, Gustafson shows that massive parallelization is nevertheless worthwhile. A parallel system cannot ","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:1","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Cluster A cluster is a computer network and usually refers to a group of two or more servers, also called nodes. All nodes are connected to each other via a local area network (LAN) and form a logical unit of a supercomputer. Gregory Pfister defines a cluster as follows: [^22] \"A cluster is a type of parallel system that consists of interconnected whole computers and is used as a single, unified resource.\" [^23] Clusters are the approach to achieving high performance, high reliability, or high throughput by using a collection of interconnected computer systems. Depending on the type of setup, either all nodes are active at the same time to increase computing power, or at least one node is passive so that it can stand in for a failed node in case of failure. 
For data transmission, all servers are connected to each other via at least two network connections to a network switch. Two network connections are typical, since on the one hand the switch is eliminated as a single point of failure (SPOF), and on the other hand a higher data transfer rate can be realized. [^24] With reference to the current Top 500 list of the world's fastest computers, the term supercomputer is absolutely appropriate. It is clear that several cluster systems are among the ten fastest computers in the world. Computer clusters are used for three different tasks: providing high availability, high performance computing and load balancing. [^25] Shared and distributed memory According to the von Neumann architecture, a computer has a shared memory that contains both computer program instructions and data. Parallel computers are divided into two variants in this respect, systems with shared or distributed memory. Regardless of which variant is used, all processors must always be able to exchange data and instructions with each other. The memory access takes place via a so-called interconnect, a connection network for the transfer of data and commands. The interconnect in clusters is an important component for exchanging and communicating data between management and load distribution processes. [^26] In systems with shared memory, all processors share a memory. The memory is divided into fixed memory modules that form a uniform address space that can be accessed by all processors. The von Neumann bottleneck comes into play here, which means that the interconnect, in this case the data and instruction bus, becomes the bottleneck between memory and processor. Due to the sequential or step-by-step processing of program instructions, only as many actions can be performed as the bus is capable of. As soon as the speed of the bus is significantly lower than the speed of the processors, the processors repeatedly have to wait. 
In practice, the waiting time is circumvented by the use of a buffer memory (cache) located between the processor and the memory. This ensures that program commands are available to the processor quickly and directly. [^27] Source: Own representation based on (Bauke \u0026 Mertens, 2006, p. 22); Cf. (Christl, Riedel, \u0026 Zelend, 2007, p. 5) Figure 7: Parallel computers with shared memory connected via the data and instruction bus. Figure 7 shows the representation of memory (M) and processors (P) connected via the interconnect. Computer systems with distributed memory, on the other hand, have a separate memory for each processor. In this case, a connection network is required. As soon as a shared memory is dispensed with, the number of processors can be increased without any problems. By using distributed memory, each processor has full control over its own address space, since it is allocated its own local memory. As with shared memory, communication takes place via an interconnect, which in this case is a local network. The advantage of computer systems with distributed memory is the increase in the number of processors, but the disadvantage lies in the disproportion between computing and communication performance, since the transport of data between CPU (C","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:2","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Virtualization Before we delve a little deeper into the chapter Container Management Systems (see Container Management Systems), we will look at the basic technology of virtualization. 
This foundational technology is necessary for the operation of container technologies like Docker. In computer science, virtualization refers to the creation of a virtual, rather than actual, version of something, such as an operating system, server, storage device, or network resources. Virtualization thus refers to a technology in which an application or an entire operating system is abstracted from the actual underlying hardware. In connection with container technologies, we distinguish between two well-known virtualization techniques: hypervisor-based and container-based virtualization. [^39] Hypervisor-based virtualization A key application of virtualization technology is server virtualization, which uses a software layer called a hypervisor to emulate hardware such as memory, CPU, and networking. The guest OS, which would normally interact with real hardware, instead interacts with a software emulation of that hardware, and often the guest OS has no idea it is running on virtualized hardware. The hypervisor decides which guest OS gets how much memory, processor time and other resources from the host machine. In this case, the host machine runs the host OS and the hypervisor. This means each OS appears to have direct access to the processor and memory, but the hypervisor actually controls the host processor and resources by allocating what is needed by each OS. While the performance of this virtual machine does not match the performance of the OS running on real hardware, the concept of hypervisor-based virtualization works quite well because most OSes and applications do not require full use of the underlying hardware. This allows for greater flexibility, control and isolation by eliminating dependency on a specific hardware platform. Originally intended for server virtualization, the concept of virtualization has expanded to applications, implemented using isolated containers. 
[^40] Container-based virtualization Container virtualization or container-based virtualization is a method of virtualizing applications but also entire operating systems. Containers in this sense are isolated partitions that are integrated into the kernel of a host machine. In these isolated partitions, multiple instances of applications can run without the need for a separate guest OS. This means that the OS is virtualized, while the containers that run inside the system have processes that have their own identity and are thus isolated from processes in other containers. Software running in containers communicates directly with the host kernel and must be executable on the operating system and CPU architecture on which the host is running. By not emulating hardware and booting a complete operating system, containers can be started in a few milliseconds and are more efficient than traditional virtual machines. Container images are typically smaller than virtual machine images because container images do not need to contain device drivers or a kernel to run an operating system. Because of the compactness of these application containers, they find their predominant use in software development. Developers do not have to set up their development machines by hand; instead, they use pre-built container images. These images capture entire application stacks and can be moved freely back and forth between different host machines, also called shipping. This is one of the reasons why container-based virtualization has become increasingly popular in recent years. [^41] Examples of container platforms are Docker from Docker Inc. and rkt from the developers of the CoreOS operating system, with Docker being the more popular of the two. Compared to virtual machines, Docker represents a simplified solution to virtualization. 
[","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:3","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Container Management Systems As soon as several containers are to be distributed and run on a parallel system such as a cluster, it is necessary to use a container management system. Such a system manages, scales and automates the deployment of application containers. Well-known open source systems are Kubernetes from Google or Docker Swarm. Another very well-known container manager worth mentioning is the OpenShift software from Redhat, but we will not discuss it further in this paper. These cluster management tools play an important role as soon as a cluster has to take care of tasks such as load balancing and scaling. In the following, we will introduce the first two container managers mentioned and explain their structure and use with containers. Docker Swarm Docker is an open standards platform for developing, packaging, and running portable distributed applications. With Docker, developers and system administrators can create, deliver, and run applications on any platform, such as a PC, the cloud, or a virtual machine. Obtaining all the necessary dependencies for a software application, including code, runtime libraries, and system tools and libraries, is often a challenge when developing and running an application. Docker simplifies application development and execution by consolidating all the required software for an application, including dependencies, into a single software unit called a Docker image that can run on any platform and environment. 
The Docker software, based on the Docker image, runs in an isolated environment called a Docker container, which contains its own file system and environment variables. Docker containers are isolated from each other and from the underlying operating system. [^43] One solution already integrated into Docker is Docker Swarm Mode. Docker Swarm is a cluster manager for Docker containers. Swarm allows administrators and developers to set up and manage a cluster consisting of multiple Docker nodes as a single virtual system. Swarm mode is based on Docker Engine, the layer between the operating system and container images. Clustering is an important feature for container technology because it creates a cooperative group of systems that provides redundancy and enables failover when one or more nodes experience a failure. A Docker Swarm Cluster provides users and developers with the ability to add or remove containers as compute requirements change. A user or administrator controls a swarm through a swarm manager, which orchestrates and deploys containers. Figure 15 shows the schematic structure and relationship between instances. [^44] Source: Own representation. Figure 15: Structure and communication of the manager and worker instances. The Swarm Manager allows a user to create a primary manager instance and multiple replica instances in case the primary instance fails, similar to an active/passive cluster. In Docker Engine's swarm mode, the user can provision so-called master nodes and worker nodes at runtime. [^45] Google Kubernetes The name Kubernetes comes from the Greek and means helmsman or pilot. Kubernetes is known in specialist circles by the acronym K8s. K8s is an acronym where the eight letters \"ubernete\" are replaced by \"8\". Kubernetes provides a platform for scaling, automatically deploying and managing container applications on distributed machines. It is an orchestration tool and supports container tools such as Apache Mesos, Packer, and Docker. 
[^46] A Kubernetes system consists of master and worker nodes, following the same principle as Docker Engine's swarm mode, with manager instances named masters. The smallest unit in a Kubernetes cluster is the pod, which runs on the worker nodes. A pod consists of one or more containers. A worker node can in turn run multiple pods. A pod is a worker process that shares virtual resources such as network and volume among its containers. [^47] Source: Own representation based on (Pods and Nodes, 2018) Figure 16: Granular representatio","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:4","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Raspberry Pi Credit-card-sized single-board computers (SBCs) such as the Raspberry Pi, developed in the UK by the Raspberry Pi Foundation, were originally intended to promote computer science education in schools. Common abbreviations for the Raspberry Pi are \"RPi\", \"RasPi\" or simply \"Pi\". Like smartphones, single-board computers are equipped with ARM processors (Advanced RISC Machines). Before the development of ARM in 1983, there were mainly CISC and RISC processors on the market. CISC stands for Complex Instruction Set Computer. Processors with this architecture are characterized by extremely complex instruction sets. Processors with RISC (Reduced Instruction Set Computer) architectures, on the other hand, have a restricted instruction set and therefore also operate with low power requirements. The Pi's board is equipped with a system-on-a-chip (SoC, i.e. single-chip system), which has the identifier BCM2837 from Broadcom. 
The SoC consists of a 1.2 GHz ARM Cortex-A53 Quad Core CPU and a VideoCore IV GPU (Graphics Processing Unit), accompanied by 1 GB of RAM. It does not include a built-in hard drive, but uses an SD card for booting and permanent data storage. It has an Ethernet port based on the RJ45 standard for connecting to a network, an HDMI port for connecting to a monitor or TV, USB (Universal Serial Bus) ports for connecting a keyboard and mouse, and a 3.5 mm jack for audio and video output. [^50] Source: Own representation based on (Merkert, 2017, p. 12) Figure 17: Illustration of a Raspberry Pi 3 Model B and its components. On average, a Raspberry Pi costs 35 EUR, which makes it economically suitable for use and integration into a cluster system, since the unit costs for individual nodes are low. [^51] General Purpose Input Output (GPIO) is the name for programmable inputs and outputs for general purposes. A Raspberry Pi has a total of 40 GPIO pins, twelve of which provide power and ground and 28 of which serve as an interface to other systems in order to communicate with or control them. GPIO pins 3 and 5 (see Figure 18) enable devices such as an LCD display to be addressed by means of Inter-Integrated Circuit (I2C), a serial data bus. Source: Own representation based on (Raspberry Pi 3 GPIO Pin Chart with Pi, 2018) Figure 18: Illustration of the different GPIO pins of the Raspberry Pi. 
","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:4:5","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Conception and design Note Check out the next part Conception, Construction and Evaluation of a Raspberry Pi Cluster (2/4) ","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:5:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["Projects"],"content":"Bibliography Adams, J. (September 2017). SFT Guide 14/17 - Raspberry Pi Tips, Tricks \u0026 Hacks (No. 1). Third generation: The innovations of the Raspberry Pi 3, p. 159. Baier, J. (2017). Getting Started with Kubernetes - Harness the power of Kubernetes to manage Docker deployments with ease. Birmingham: Packt Publishing. Bauke, H., \u0026 Mertens, S. (2006). Cluster computing - Practical introduction to high performance computing on Linux clusters. Heidelberg: Springer. Bedner, M. (2012). Cloud computing - technology, security and legal design. Kassel: kassel university press. Bengel, G., Baun, C., Kunze, M., \u0026 Stucky, K.-U. (2008). Master course parallel and distributed systems - fundamentals and programming of multicore processors, multiprocessors, clusters and grids. Wiesbaden: Vieweg+Teubner. Beowulf Cluster Computing. (January 12, 2018). Retrieved from MichiganTech - Beowulf Cluster Computing: http://www.cs.mtu.edu/beowulf/ Beowulf Scalable Mass Storage (T-Racks). 
(January 12, 2018). Retrieved from ESS Project: https://www.hq.nasa.gov/hpcc/reports/annrpt97/accomps/ess/WW80.html BOINC - Active Projects Statistics. (January 31, 2018). Retrieved from Free-DC - Distributed Computing Stats System: http://stats.free-dc.org/stats.php?page=userbycpid\u0026cpid=cfbdd0ffc5596f8c5fed01bbe619679d Cache. (January 14, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0309291.htm Christl, D., Riedel, M., \u0026 Zelend, M. (2007). Communication systems / computer networks - Research of tools for the control of a massively parallel cluster computer in the computer center of the West Saxon University of Applied Sciences Zwickau. Zwickau: Westsächsische Hochschule Zwickau. CISC and RISC. (January 28, 2018). Retrieved from Electronics Compendium: https://www.elektronik-kompendium.de/sites/com/0412281.htm Containers vs. virtual machines. (December 13, 2017). Retrieved from NetApp Blog: https://blog.netapp.com/blogs/containers-vs-vms/ Coulouris, G., Dollimore, J., Kindberg, T., \u0026 Blair, G. (2012). Distributed systems - concepts and design. Boston: Pearson. Dennis, A. K. (2013). Raspberry Pi super cluster. Birmingham: Packt Publishing. The science behind SETI@home. (January 30, 2018). Retrieved from SETI@home: https://setiathome.berkeley.edu/sah_about.php Docker on the Raspberry Pi with HypriotOS. (January 24, 2018). Retrieved from Raspberry Pi Geek: http://www.raspberry-pi-geek.de/Magazin/2017/12/Docker-auf-dem-Raspberry-Pi-mit-HypriotOS Eder, M. (2016). Hypervisor- vs. container-based virtualization. Munich: Technical University of Munich. Einstein@Home on Android devices. (January 23, 2018). Retrieved from GEO600: http://www.geo600.org/1282133/Einstein_Home_on_Android_devices Enable I2C Interface on the Raspberry Pi. (January 28, 2018). Retrieved from Raspberry Pi Spy: https://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/ Encyclopedia - VAX. (January 20, 2018). 
Retrieved from PCmag: https://www.pcmag.com/encyclopedia/term/53678/vax Failover Cluster. (January 20, 2018). Retrieved from Microsoft Developer Network: https://msdn.microsoft.com/en-us/library/ff650328.aspx Fenner, P. (December 10, 2017). So What's a Practical Laser-Cut Clip Size? Retrieved from DefProc Engineering: https://www.deferredprocrastination.co.uk/blog/2013/so-whats-a-practical-laser-cut-clip-size/ Fey, D. (2010). Grid computing - An enabling technology for computational science. Heidelberg: Springer. GitHub - flash. (January 24, 2018). Retrieved from hypriot / flash: https://github.com/hypriot/flash Goasguen, S. (2015). Docker Cookbook - Solutions and Examples for Building Distributed Applications. Sebastopol: O'Reilly. Grabsch, V., \u0026 Radunz, Y. (2008). Seminar presentation - Amdahl's and Gustafson's law. o.O.: Creative Commons. Herminghaus, V., \u0026 Scriba, A. (2006). Veritas Storage Foundation - High End Computing for UNIX Design and Implementation of High Availability Solutions with VxVM and VCS. Heidelberg: Springer. Hori","date":"Tuesday, Mar 9, 2021","objectID":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/:6:0","tags":["Raspberry","Pi","cluster","failover","Kubernetes","k8s","docker","HypriotOS","PiCube","GPIO","YAML","Bash"],"title":"Conception, Construction and Evaluation of a Raspberry Pi Cluster (1/4)","uri":"/conception-construction-and-evaluation-of-a-raspberry-pi-cluster-1/"},{"categories":["TIL","Azure"],"content":"Let’s keep it easy with these one-liners you can use for Windows as well as Linux. 
","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-powershell-7/:0:0","tags":["TIL","Azure","PowerShell","posh","pwsh","winget"],"title":"How to install and update Azure PowerShell 7","uri":"/how-to-install-powershell-7/"},{"categories":["TIL","Azure"],"content":"One-liner to install or update PowerShell 7 on Windows 10 # One-liner to install or update PowerShell 7 on Windows 10 iex \"\u0026 { $(irm https://aka.ms/install-powershell.ps1) } -UseMSI\" . . ","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-powershell-7/:0:1","tags":["TIL","Azure","PowerShell","posh","pwsh","winget"],"title":"How to install and update Azure PowerShell 7","uri":"/how-to-install-powershell-7/"},{"categories":["TIL","Azure"],"content":"One-liner to install or update PowerShell 7 on Linux # One-liner to install or update PowerShell 7 on Linux wget https://aka.ms/install-powershell.sh; sudo bash install-powershell.sh; rm install-powershell.sh . ","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-powershell-7/:0:2","tags":["TIL","Azure","PowerShell","posh","pwsh","winget"],"title":"How to install and update Azure PowerShell 7","uri":"/how-to-install-powershell-7/"},{"categories":["TIL","Azure"],"content":"Install PowerShell 7 using winget # Install PowerShell 7 using winget winget install PowerShell Tip You can start your PowerShell 7 session with pwsh . References Installing various versions of PowerShell How to install the Azure CLI ","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-powershell-7/:0:3","tags":["TIL","Azure","PowerShell","posh","pwsh","winget"],"title":"How to install and update Azure PowerShell 7","uri":"/how-to-install-powershell-7/"},{"categories":["TIL","Azure","Scripts"],"content":"Let’s keep it easy with these one-liners you can use for Windows as well as Linux to install or update Azure CLI. 
","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-azure-cli/:0:0","tags":["TIL","CLI","Azure","az cli","Bash"],"title":"How to install Azure CLI","uri":"/how-to-install-azure-cli/"},{"categories":["TIL","Azure","Scripts"],"content":"One-liner to install or update Azure CLI on Windows 10 # One-liner to install or update Azure CLI on Windows 10 iwr https://aka.ms/installazurecliwindows -OutFile .\\AzureCLI.msi; start msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; rm .\\AzureCLI.msi . The following PowerShell cmdlet aliases were used: *iwr = Invoke-WebRequest *start = Start-Process *rm = Remove-Item For more details on aliases see How to get a list of PowerShell Aliases Tip You can verify your Azure CLI version with az --version . ","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-azure-cli/:0:1","tags":["TIL","CLI","Azure","az cli","Bash"],"title":"How to install Azure CLI","uri":"/how-to-install-azure-cli/"},{"categories":["TIL","Azure","Scripts"],"content":"One-liner to install or update Azure CLI on Linux # One-liner to install or update Azure CLI on Linux curl -L https://aka.ms/InstallAzureCli | bash . Tip The script can also be downloaded and run locally. You may have to restart your shell in order for changes to take effect. References Installing various versions of PowerShell How to install the Azure CLI Install the Azure CLI on Linux ","date":"Saturday, Mar 6, 2021","objectID":"/how-to-install-azure-cli/:0:2","tags":["TIL","CLI","Azure","az cli","Bash"],"title":"How to install Azure CLI","uri":"/how-to-install-azure-cli/"},{"categories":["documentation"],"content":"This article shows the basic Markdown syntax and format.","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"This article offers a sample of basic Markdown syntax that can be used in Hugo content files. 
Note This article is a shameless copy of the great Grav original page. Let’s face it: Writing content for the Web is tiresome. WYSIWYG editors help alleviate this task, but they generally result in horrible code, or worse yet, ugly web pages. Markdown is a better way to write HTML, without all the complexities and ugliness that usually accompany it. Some of the key benefits are: Markdown is simple to learn, with minimal extra characters, so it’s also quicker to write content. Less chance of errors when writing in Markdown. Produces valid XHTML output. Keeps the content and the visual display separate, so you cannot mess up the look of your site. Write in any text editor or Markdown application you like. Markdown is a joy to use! John Gruber, the author of Markdown, puts it like this: The overriding design goal for Markdown’s formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown’s syntax is the format of plain text email. – John Gruber Without further delay, let us go over the main elements of Markdown and what the resulting HTML looks like! Tip Bookmark this page for easy future reference! 
","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:0:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"1 Headings Headings from h2 through h6 are constructed with a # for each level: ## h2 Heading ### h3 Heading #### h4 Heading ##### h5 Heading ###### h6 Heading The HTML looks like this: \u003ch2\u003eh2 Heading\u003c/h2\u003e \u003ch3\u003eh3 Heading\u003c/h3\u003e \u003ch4\u003eh4 Heading\u003c/h4\u003e \u003ch5\u003eh5 Heading\u003c/h5\u003e \u003ch6\u003eh6 Heading\u003c/h6\u003e Heading IDs To add a custom heading ID, enclose the custom ID in curly braces on the same line as the heading: ### A Great Heading {#custom-id} The HTML looks like this: \u003ch3 id=\"custom-id\"\u003eA Great Heading\u003c/h3\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:1:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"2 Comments Comments should be HTML compatible. \u003c!-- This is a comment --\u003e Comment below should NOT be seen: ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:2:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"3 Horizontal Rules The HTML \u003chr\u003e element is for creating a “thematic break” between paragraph-level elements. 
In Markdown, you can create a \u003chr\u003e with any of the following: ___: three consecutive underscores ---: three consecutive dashes ***: three consecutive asterisks The rendered output looks like this: ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:3:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"4 Body Copy Body copy written as normal, plain text will be wrapped with \u003cp\u003e\u003c/p\u003e tags in the rendered HTML. So this body copy: Lorem ipsum dolor sit amet, graecis denique ei vel, at duo primis mandamus. Et legere ocurreret pri, animal tacimates complectitur ad cum. Cu eum inermis inimicus efficiendi. Labore officiis his ex, soluta officiis concludaturque ei qui, vide sensibus vim ad. The HTML looks like this: \u003cp\u003eLorem ipsum dolor sit amet, graecis denique ei vel, at duo primis mandamus. Et legere ocurreret pri, animal tacimates complectitur ad cum. Cu eum inermis inimicus efficiendi. Labore officiis his ex, soluta officiis concludaturque ei qui, vide sensibus vim ad.\u003c/p\u003e A line break can be done with one blank line. ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:4:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"5 Inline HTML If you need a certain HTML tag (with a class) you can simply use HTML: Paragraph in Markdown. \u003cdiv class=\"class\"\u003e This is \u003cb\u003eHTML\u003c/b\u003e \u003c/div\u003e Paragraph in Markdown. 
","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:5:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"6 Emphasis ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:6:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Bold For emphasizing a snippet of text with a heavier font-weight. The following snippet of text is rendered as bold text. **rendered as bold text** __rendered as bold text__ The HTML looks like this: \u003cstrong\u003erendered as bold text\u003c/strong\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:6:1","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Italics For emphasizing a snippet of text with italics. The following snippet of text is rendered as italicized text. *rendered as italicized text* _rendered as italicized text_ The HTML looks like this: \u003cem\u003erendered as italicized text\u003c/em\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:6:2","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Strikethrough In GFMGitHub flavored Markdown you can do strikethroughs. ~~Strike through this text.~~ The rendered output looks like this: Strike through this text. The HTML looks like this: \u003cdel\u003eStrike through this text.\u003c/del\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:6:3","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Combination Bold, italics, and strikethrough can be used in combination. 
***bold and italics*** ~~**strikethrough and bold**~~ ~~*strikethrough and italics*~~ ~~***bold, italics and strikethrough***~~ The rendered output looks like this: bold and italics strikethrough and bold strikethrough and italics bold, italics and strikethrough The HTML looks like this: \u003cem\u003e\u003cstrong\u003ebold and italics\u003c/strong\u003e\u003c/em\u003e \u003cdel\u003e\u003cstrong\u003estrikethrough and bold\u003c/strong\u003e\u003c/del\u003e \u003cdel\u003e\u003cem\u003estrikethrough and italics\u003c/em\u003e\u003c/del\u003e \u003cdel\u003e\u003cem\u003e\u003cstrong\u003ebold, italics and strikethrough\u003c/strong\u003e\u003c/em\u003e\u003c/del\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:6:4","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"7 Blockquotes For quoting blocks of content from another source within your document. Add \u003e before any text you want to quote: \u003e **Fusion Drive** combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined. The rendered output looks like this: Fusion Drive combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined. The HTML looks like this: \u003cblockquote\u003e \u003cp\u003e \u003cstrong\u003eFusion Drive\u003c/strong\u003e combines a hard drive with a flash storage (solid-state drive) and presents it as a single logical volume with the space of both drives combined. \u003c/p\u003e \u003c/blockquote\u003e Blockquotes can also be nested: \u003e Donec massa lacus, ultricies a ullamcorper in, fermentum sed augue. Nunc augue augue, aliquam non hendrerit ac, commodo vel nisi. \u003e\u003e Sed adipiscing elit vitae augue consectetur a gravida nunc vehicula. Donec auctor odio non est accumsan facilisis. 
Aliquam id turpis in dolor tincidunt mollis ac eu diam. The rendered output looks like this: Donec massa lacus, ultricies a ullamcorper in, fermentum sed augue. Nunc augue augue, aliquam non hendrerit ac, commodo vel nisi. Sed adipiscing elit vitae augue consectetur a gravida nunc vehicula. Donec auctor odio non est accumsan facilisis. Aliquam id turpis in dolor tincidunt mollis ac eu diam. ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:7:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"8 Lists ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:8:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Unordered A list of items in which the order of the items does not explicitly matter. You may use any of the following symbols to denote bullets for each list item: * valid bullet - valid bullet + valid bullet For example: * Lorem ipsum dolor sit amet * Consectetur adipiscing elit * Integer molestie lorem at massa * Facilisis in pretium nisl aliquet * Nulla volutpat aliquam velit * Phasellus iaculis neque * Purus sodales ultricies * Vestibulum laoreet porttitor sem * Ac tristique libero volutpat at * Faucibus porta lacus fringilla vel * Aenean sit amet erat nunc * Eget porttitor lorem The rendered output looks like this: Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Phasellus iaculis neque Purus sodales ultricies Vestibulum laoreet porttitor sem Ac tristique libero volutpat at Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem The HTML looks like this: \u003cul\u003e \u003cli\u003eLorem ipsum dolor sit amet\u003c/li\u003e \u003cli\u003eConsectetur adipiscing elit\u003c/li\u003e \u003cli\u003eInteger molestie lorem at 
massa\u003c/li\u003e \u003cli\u003eFacilisis in pretium nisl aliquet\u003c/li\u003e \u003cli\u003eNulla volutpat aliquam velit \u003cul\u003e \u003cli\u003ePhasellus iaculis neque\u003c/li\u003e \u003cli\u003ePurus sodales ultricies\u003c/li\u003e \u003cli\u003eVestibulum laoreet porttitor sem\u003c/li\u003e \u003cli\u003eAc tristique libero volutpat at\u003c/li\u003e \u003c/ul\u003e \u003c/li\u003e \u003cli\u003eFaucibus porta lacus fringilla vel\u003c/li\u003e \u003cli\u003eAenean sit amet erat nunc\u003c/li\u003e \u003cli\u003eEget porttitor lorem\u003c/li\u003e \u003c/ul\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:8:1","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Ordered A list of items in which the order of items does explicitly matter. 1. Lorem ipsum dolor sit amet 2. Consectetur adipiscing elit 3. Integer molestie lorem at massa 4. Facilisis in pretium nisl aliquet 5. Nulla volutpat aliquam velit 6. Faucibus porta lacus fringilla vel 7. Aenean sit amet erat nunc 8. Eget porttitor lorem The rendered output looks like this: Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem The HTML looks like this: \u003col\u003e \u003cli\u003eLorem ipsum dolor sit amet\u003c/li\u003e \u003cli\u003eConsectetur adipiscing elit\u003c/li\u003e \u003cli\u003eInteger molestie lorem at massa\u003c/li\u003e \u003cli\u003eFacilisis in pretium nisl aliquet\u003c/li\u003e \u003cli\u003eNulla volutpat aliquam velit\u003c/li\u003e \u003cli\u003eFaucibus porta lacus fringilla vel\u003c/li\u003e \u003cli\u003eAenean sit amet erat nunc\u003c/li\u003e \u003cli\u003eEget porttitor lorem\u003c/li\u003e \u003c/ol\u003e Tip If you just use 1. 
for each number, Markdown will automatically number each item. For example: 1. Lorem ipsum dolor sit amet 1. Consectetur adipiscing elit 1. Integer molestie lorem at massa 1. Facilisis in pretium nisl aliquet 1. Nulla volutpat aliquam velit 1. Faucibus porta lacus fringilla vel 1. Aenean sit amet erat nunc 1. Eget porttitor lorem The rendered output looks like this: Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Faucibus porta lacus fringilla vel Aenean sit amet erat nunc Eget porttitor lorem ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:8:2","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Task Lists Task lists allow you to create a list of items with checkboxes. To create a task list, add dashes (-) and brackets with a space ([ ]) before task list items. To select a checkbox, add an x in between the brackets ([x]). - [x] Write the press release - [ ] Update the website - [ ] Contact the media The rendered output looks like this: Write the press release Update the website Contact the media ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:8:3","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"9 Code ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:9:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Inline Code Wrap inline snippets of code with `. In this example, `\u003csection\u003e\u003c/section\u003e` should be wrapped as **code**. The rendered output looks like this: In this example, \u003csection\u003e\u003c/section\u003e should be wrapped as code. 
The HTML looks like this: \u003cp\u003e In this example, \u003ccode\u003e\u0026lt;section\u0026gt;\u0026lt;/section\u0026gt;\u003c/code\u003e should be wrapped with \u003cstrong\u003ecode\u003c/strong\u003e. \u003c/p\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:9:1","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Indented Code Or indent several lines of code by at least four spaces, as in: // Some comments line 1 of code line 2 of code line 3 of code The rendered output looks like this: // Some comments line 1 of code line 2 of code line 3 of code The HTML looks like this: \u003cpre\u003e \u003ccode\u003e // Some comments line 1 of code line 2 of code line 3 of code \u003c/code\u003e \u003c/pre\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:9:2","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Block Fenced Code Use “fences” ``` to block in multiple lines of code with a language attribute. ```markdown Sample text here... ``` The HTML looks like this: \u003cpre language-html\u003e \u003ccode\u003eSample text here...\u003c/code\u003e \u003c/pre\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:9:3","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Syntax Highlighting GFMGitHub Flavored Markdown also supports syntax highlighting. To activate it, simply add the file extension of the language you want to use directly after the first code “fence”, ```js, and syntax highlighting will automatically be applied in the rendered HTML. 
For example, to apply syntax highlighting to JavaScript code: ```js grunt.initConfig({ assemble: { options: { assets: 'docs/assets', data: 'src/data/*.{json,yml}', helpers: 'src/custom-helpers.js', partials: ['src/partials/**/*.{hbs,md}'] }, pages: { options: { layout: 'default.hbs' }, files: { './': ['src/templates/pages/index.hbs'] } } } }); ``` The rendered output looks like this: grunt.initConfig({ assemble: { options: { assets: 'docs/assets', data: 'src/data/*.{json,yml}', helpers: 'src/custom-helpers.js', partials: ['src/partials/**/*.{hbs,md}'] }, pages: { options: { layout: 'default.hbs' }, files: { './': ['src/templates/pages/index.hbs'] } } } }); Note Syntax highlighting page in Hugo Docs introduces more about syntax highlighting, including the highlight shortcode. ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:9:4","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"10 Tables Tables are created by adding pipes as dividers between each cell, and by adding a line of dashes (also separated by bars) beneath the header. Note that the pipes do not need to be vertically aligned. | Option | Description | | ------ | ----------- | | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | The rendered output looks like this: Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. 
The HTML looks like this: \u003ctable\u003e \u003cthead\u003e \u003ctr\u003e \u003cth\u003eOption\u003c/th\u003e \u003cth\u003eDescription\u003c/th\u003e \u003c/tr\u003e \u003c/thead\u003e \u003ctbody\u003e \u003ctr\u003e \u003ctd\u003edata\u003c/td\u003e \u003ctd\u003epath to data files to supply the data that will be passed into templates.\u003c/td\u003e \u003c/tr\u003e \u003ctr\u003e \u003ctd\u003eengine\u003c/td\u003e \u003ctd\u003eengine to be used for processing templates. Handlebars is the default.\u003c/td\u003e \u003c/tr\u003e \u003ctr\u003e \u003ctd\u003eext\u003c/td\u003e \u003ctd\u003eextension to be used for dest files.\u003c/td\u003e \u003c/tr\u003e \u003c/tbody\u003e \u003c/table\u003e Right or center aligned text Adding a colon on the right side of the dashes below any heading will right align text for that column. Adding colons on both sides of the dashes below any heading will center align text for that column. | Option | Description | |:------:| -----------:| | data | path to data files to supply the data that will be passed into templates. | | engine | engine to be used for processing templates. Handlebars is the default. | | ext | extension to be used for dest files. | The rendered output looks like this: Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. 
","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:10:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"11 Links ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:11:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Basic Link \u003chttps://assemble.io\u003e \[email protected]\u003e [Assemble](https://assemble.io) The rendered output looks like this (hover over the link, there is no tooltip): https://assemble.io [email protected] Assemble The HTML looks like this: \u003ca href=\"https://assemble.io\"\u003ehttps://assemble.io\u003c/a\u003e \u003ca href=\"mailto:[email protected]\"\[email protected]\u003c/a\u003e \u003ca href=\"https://assemble.io\"\u003eAssemble\u003c/a\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:11:1","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Add a Title [Upstage](https://github.com/upstage/ \"Visit Upstage!\") The rendered output looks like this (hover over the link, there should be a tooltip): Upstage The HTML looks like this: \u003ca href=\"https://github.com/upstage/\" title=\"Visit Upstage!\"\u003eUpstage\u003c/a\u003e ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:11:2","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"Named Anchors Named anchors enable you to jump to the specified anchor point on the same page. For example, each of these chapters: ## Table of Contents * [Chapter 1](#chapter-1) * [Chapter 2](#chapter-2) * [Chapter 3](#chapter-3) will jump to these sections: ## Chapter 1 \u003ca id=\"chapter-1\"\u003e\u003c/a\u003e Content for chapter one. 
## Chapter 2 \u003ca id=\"chapter-2\"\u003e\u003c/a\u003e Content for chapter one. ## Chapter 3 \u003ca id=\"chapter-3\"\u003e\u003c/a\u003e Content for chapter one. Note The specific placement of the anchor tag seems to be arbitrary. They are placed inline here since it seems to be unobtrusive, and it works. ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:11:3","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"12 Footnotes Footnotes allow you to add notes and references without cluttering the body of the document. When you create a footnote, a superscript number with a link appears where you added the footnote reference. Readers can click the link to jump to the content of the footnote at the bottom of the page. To create a footnote reference, add a caret and an identifier inside brackets ([^1]). Identifiers can be numbers or words, but they can’t contain spaces or tabs. Identifiers only correlate the footnote reference with the footnote itself — in the output, footnotes are numbered sequentially. Add the footnote using another caret and number inside brackets with a colon and text ([^1]: My footnote.). You don’t have to put footnotes at the end of the document. You can put them anywhere except inside other elements like lists, block quotes, and tables. This is a digital footnote[^1]. This is a footnote with \"label\"[^label] [^1]: This is a digital footnote [^label]: This is a footnote with \"label\" This is a digital footnote1. This is a footnote with “label”2 ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:12:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["documentation"],"content":"13 Images Images have a similar syntax to links but include a preceding exclamation point. 
![Minion](https://octodex.github.com/images/minion.png\" caption=\".\" \u003e}} ![Minion](https://octodex.github.com/images/minion.png\" caption=\".\" \u003e}} or: ![Alt text](https://octodex.github.com/images/stormtroopocat.jpg \"The Stormtroopocat\") The Stormtroopocat Like links, images also have a footnote style syntax: ![Alt text][id] The Dojocat With a reference later in the document defining the URL location: [id]: https://octodex.github.com/images/dojocat.jpg \"The Dojocat\" Tip LoveIt theme has special shortcode for image, which provides more features. This is a digital footnote ↩︎ This is a footnote with “label” ↩︎ ","date":"Monday, Mar 1, 2021","objectID":"/basic-markdown-syntax/:13:0","tags":["Markdown","HTML"],"title":"Basic Markdown Syntax","uri":"/basic-markdown-syntax/"},{"categories":["Azure","Scripts","DevOps"],"content":"Create self-hosted Azure Pipelines Container Agents for Azure Devops. Pipeline Build Status PipelineAgents ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:0:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Overview The following explains how to easily build, setup and run self-hosted docker container agents using Azure Pipelines in Azure DevOps (ADO). The pipeline does the following for you: Creates an Azure Container Registry (ACR). Builds Docker Container Image for self-hosted Azure Pipelines Agent within that ACR. Starts Docker Container as Azure Container Instances (ACI). Connects ACIs to your Azure DevOps Agent Pool (Self-Hosted). 
","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:1:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Repository github.com/segraef/apa ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:2:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Requirements Azure DevOps Project Azure DevOps preview feature multi-stage pipelines enabled Azure Resource Manager Service Connection ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:3:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Setup Create an Azure DevOps Agent pool within your Azure DevOps organization. Generate a Personal Access Token (PAT) for your Azure DevOps Organization. When generating the PAT, assign the following scopes: Agent Pools - Read \u0026 Manage Deployment Groups - Read \u0026 Manage Create a new repository and clone/fork this repo into it. 
In Pipelines/Library add a variable group named vg.PipelineAgents, with the following variable to avoid exposing keys \u0026 secrets in code agentPoolToken = \u003cagentPoolToken\u003e # personal access token for agent pool In parameters.yml adjust the following variables acrName = \u003cacrName\u003e # Azure Container Registry Name (needs to be unique) adoUrl = https://dev.azure.com/\u003corganization\u003e # Azure DevOps Organization URL agentPool = \u003cagentPool\u003e # agent-pool name location = \u003clocation\u003e # where your resources will be created resourceGroupName = \u003cresourceGroupName\u003e # where your agents will be placed serviceConnection = \u003cserviceConnection\u003e # arm service connection name Create a new pipeline using the pipeline.yaml and run it. ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:4:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Helper Scripts Instead of using an Azure Pipeline, you can also run these tasks locally, using your local machine as the agent. For that you can find a Helper file here. If you’re not familiar with Docker at all, I recommend the Docker Quickstart. 
","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:5:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Docker Container image contents The Docker container images are based on the official Azure Pipelines VM images for Microsoft-hosted CI/CD. ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:6:0","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Ubuntu / Debian Azure CLI (latest) Git (latest) PowerShell Core (latest) .NET SDK (2.1) Docker (18.06.3-ce) Kubectl (1.14.4) Terraform (0.12.6) ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:6:1","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["Azure","Scripts","DevOps"],"content":"Windows Server Core (ltsc2019) Chocolatey (latest) Azure CLI (latest) Git (latest) PowerShell Core (latest) Docker (in progress) Kubectl (in progress) Terraform (in progress) References Azure DevOps Project Azure DevOps preview feature multi-stage pipelines enabled Azure Resource Manager Service Connection Azure DevOps Agent pool Personal Access Token (PAT) 
Variable Group Docker Quickstart Azure Pipelines VM images for Microsoft-hosted CI/CD GitHub Actions Virtual Environments ","date":"Monday, Dec 7, 2020","objectID":"/azure-pipelines-container-agents-for-azure-devops/:6:2","tags":["Automation","scripts","Azure","container","ACR","ADO","DevOps","iac","ACI","build","GitHub","pipeline","kubectl","terraform","docker","git","Chocolatey","PowerShell"],"title":"Azure Pipelines Container Agents for Azure Devops","uri":"/azure-pipelines-container-agents-for-azure-devops/"},{"categories":["GitHub","Azure"],"content":"All ARM enthusiasts among us will probably rejoice: Microsoft announced a new ARM DSL, called Bicep. I won’t go into too much detail here, as I’m more into how to use a GitHub Action to use Bicep to generate an ARM template out of a .bicep file. But let me give you some context on Bicep. ","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:0:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"Wait what? Bicep is a domain-specific language (DSL) for the declarative use of Azure resources. It simplifies the authoring experience with cleaner syntax and better support for modularity and code reuse. Bicep is a transparent abstraction over ARM and ARM templates, which means that everything that can be done in an ARM template can also be done in Bicep (outside of the temporary known limitations). All resource types, apiVersions and properties that are valid in an ARM template are equally valid from day one in Bicep. Bicep compiles down to standard ARM template JSON files, which means that the ARM JSON is effectively treated as an intermediate language (IL). 
Source: Azure/bicep ","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:1:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"Where do I get started? ","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:2:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"GitHub Action Bicep ","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:3:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"Procedure A Bicep file looks like the following: With the help of the Bicep CLI you can compile a Bicep file into an ARM template: bicep build ./main.bicep main.json The GitHub Workflow looks like the following: Once your workflow and main.bicep are pushed/committed to your repository, the workflow gets executed. After that, the GitHub Action segraef/bicep compiles your main.bicep into an ARM template, which is then deployed to Azure using New-AzResourceGroupDeployment. GitHub Workflow execution. Finished example deployment of a storage account in Azure defined in main.bicep. 
","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:3:1","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"Yeah wow great and now? This GitHub action is not a magic bullet and is just an experiment to play around with Bicep. But this action has one advantage: it only needs to be fed a Bicep file and will automatically deploy that file to Azure. References Azure GitHub Actions and Workflows ","date":"Thursday, Sep 3, 2020","objectID":"/github-action-for-project-bicep-arm-dsl/:4:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Github Action for Project Bicep (ARM DSL)","uri":"/github-action-for-project-bicep-arm-dsl/"},{"categories":["GitHub","Azure"],"content":"Let me give you an introduction to using the power of GitHub Actions and Workflows to deploy resources into Azure. I’m going to explain the basics of GitHub Actions, Workflows, runners and how to deploy resources into Azure. At the end of this post you should have understood how GitHub Actions and Workflows work together to give you a kickstart to run your own. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:0:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Prerequisites The only thing you need is a GitHub Account. If you don’t have an account yet, you should create one here. 
","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:1:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"What is a GitHub Action? A GitHub Action is an individual task that you can combine with other Actions to create jobs and customize your workflow with the help of specific steps. You can either use Actions shared by the GitHub community or create your own. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:2:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Why GitHub Actions? That’s the cool thing about GitHub Actions, it is what the name says, an Action. Usually you have code sitting in your GitHub repository and now you want to get this code running or deployed. Simply said, GitHub Actions help you automate your software development workflows in the same place you store code and collaborate on pull requests and issues without the need for a 3rd party CI/CD platform like Azure DevOps or Jenkins. Another advantage of using Actions is that you can reference other publicly available repositories on GitHub directly. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:3:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"How do GitHub Actions work? Different types of GitHub Actions can be built with Docker containers and JavaScript. 
Actions require a metadata file to define the inputs, outputs and main entrypoint for your action. The metadata filename must be either action.yml or action.yaml. A very important part here is the main entrypoint. The strengths of GitHub Actions is that you can use whatever code you prefer and that’s where your entrypoint comes into play. The GitHub Action executes the entrypoint in your specific language. If you’re good in Python, C#, Go or even Bash, go ahead and use your preferred language. If you plan to combine your own actions, workflows, and code in a single repository, I recommend storing actions in the .github directory (unless you plan to use public Actions). For example, .github/actions/action-a and .github/actions/action-b. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:4:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Where do GitHub Workflows come into play? You can think of continuous integration (CI) and continuous deployment (CD) directly in your GitHub repository. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub. Compared with Azure Pipelines in Azure DevOps you have pipeline tasks which execute specific steps. GitHub workflows in turn use GitHub Actions to execute these as steps. To use an Action in a workflow, you must include it as a step. GitHub Workflows have to be stored in the .github directory. For example, .github/workflows/workflow-a or .github/workflows/workflow-b. 
","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:5:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"GitHub runners, wait what? In order for your code to be executed, it must run somewhere, and this is done with so-called GitHub runners. They are comparable to hosted pipeline agents in Azure DevOps. Workflows run in Linux, macOS, Windows, and containers on GitHub-hosted runners. Alternatively, you can host your own self-hosted runners. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:6:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"What options do I have to deploy Azure resources with GitHub Actions? You have several options to deploy resources into Azure. You can use publicly available Actions like Azure CLI (azure/cli) action sets up the GitHub Action runner environment with the latest (or any user-specified) version of the Azure CLI. Azure PowerShell (azure/PowerShell) action sets up the GitHub Action runner environment with the latest (or any user-specified) version of the Azure PowerShell module. With these Actions you can then run Azure CLI or Azure PowerShell scripts to create and manage any Azure resource. See sample workflow with public Actions below. Other options are to use or create your own Actions (see segraef/aga). 
","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:7:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Sample Workflow with public Actions This example workflow uses following publicly available Actions Azure Login (azure/login) Azure CLI (azure/cli) Azure PowerShell (azure/PowerShell) Within your Workflow you have to login to Azure. Before running Azure PowerShell scripts you can make use of the Action Azure Login (azure/login). In order for this action to log into your environment you need Azure service principal credentials. Easily create your own service principal credentials with the help of this az cli snippet az ad sp create-for-rbac --name \"\u003cspName\u003e\" --role contributor --scopes /subscriptions/\u003csubscriptionId\u003e --sdk-auth Once you’ve created your credentials go ahead and add them as secret AZURE_CREDENTIALS to your GitHub repository. ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:8:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Dependencies on other GitHub Actions If you’re looking for a way to use managed identities, you can also use my GitHub action (segraef/azlogin) which I’ve equipped with the ability to use managed identities. Remember, Managed Identities can only be used in conjunction with Self-Hosted Runners. Once login is done, Azure PowerShell action will use the same session to run the script. 
","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:9:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["GitHub","Azure"],"content":"Sample workflow to run inlinescript using Azure PowerShell The following sample workflow can be found here. Place this workflow file in your .github directory, for instance .github/workflows/example.yml. Once you commit and push your code, the Workflow starts to run and executes your Actions. Representation of the actual GitHub Workflow run. Congratulations, you just created and started your first GitHub Workflow! References Azure GitHub Actions and Workflows github.com/segraef/aga ","date":"Thursday, Sep 3, 2020","objectID":"/azure-github-actions-and-workflows/:10:0","tags":["automation","Scripts","PowerShell","Bicep","json","github","git","action","workflow","YAML","ARM"],"title":"Azure Github Actions and Workflows","uri":"/azure-github-actions-and-workflows/"},{"categories":["WordPress","Other"],"content":"This article offers a sample of basic Markdown syntax that can be used in Hugo content files; it also shows whether basic HTML elements are decorated with CSS in a Hugo theme. ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:0:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Headings The following HTML \u003ch1\u003e—\u003ch6\u003e elements represent six levels of section headings. \u003ch1\u003e is the highest section level while \u003ch6\u003e is the lowest. 
H1 ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:1:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"H2 ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:2:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"H3 H4 H5 H6 ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:2:1","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Paragraph Xerum, quo qui aut unt expliquam qui dolut labo. Aque venitatiusda cum, voluptionse latur sitiae dolessi aut parist aut dollo enim qui voluptate ma dolestendit peritin re plis aut quas inctum laceat est volestemque commosa as cus endigna tectur, offic to cor sequas etum rerum idem sintibus eiur? Quianimin porecus evelectur, cum que nis nust voloribus ratem aut omnimi, sitatur? Quiatem. Nam, omnis sum am facea corem alique molestrunt et eos evelece arcillit ut aut eos eos nus, sin conecerem erum fuga. Ri oditatquam, ad quibus unda veliamenimin cusam et facea ipsamus es exerum sitate dolores editium rerore eost, temped molorro ratiae volorro te reribus dolorer sperchicium faceata tiustia prat. Itatur? Quiatae cullecum rem ent aut odis in re eossequodi nonsequ idebis ne sapicia is sinveli squiatum, core et que aut hariosam ex eat. 
","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:3:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Blockquotes The blockquote element represents content that is quoted from another source, optionally with a citation which must be within a footer or cite element, and optionally with in-line changes such as annotations and abbreviations. Blockquote without attribution Tiam, ad mint andaepu dandae nostion secatur sequo quae. Note that you can use Markdown syntax within a blockquote. Blockquote with attribution Don’t communicate by sharing memory, share memory by communicating. — Rob Pike1 ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:4:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Tables Tables aren’t part of the core Markdown spec, but Hugo supports them out-of-the-box. 
Name Age Bob 27 Alice 23 Inline Markdown within tables Italics Bold Code italics bold code ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:5:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Code Blocks Code block with backticks \u003c!doctype html\u003e \u003chtml lang=\"en\"\u003e \u003chead\u003e \u003cmeta charset=\"utf-8\"\u003e \u003ctitle\u003eExample HTML5 Document\u003c/title\u003e \u003c/head\u003e \u003cbody\u003e \u003cp\u003eTest\u003c/p\u003e \u003c/body\u003e \u003c/html\u003e Code block indented with four spaces \u003c!doctype html\u003e \u003chtml lang=\"en\"\u003e \u003chead\u003e \u003cmeta charset=\"utf-8\"\u003e \u003ctitle\u003eExample HTML5 Document\u003c/title\u003e \u003c/head\u003e \u003cbody\u003e \u003cp\u003eTest\u003c/p\u003e \u003c/body\u003e \u003c/html\u003e Code block with Hugo’s internal highlight shortcode \u003c!doctype html\u003e \u003chtml lang=\"en\"\u003e \u003chead\u003e \u003cmeta charset=\"utf-8\"\u003e \u003ctitle\u003eExample HTML5 Document\u003c/title\u003e \u003c/head\u003e \u003cbody\u003e \u003cp\u003eTest\u003c/p\u003e \u003c/body\u003e \u003c/html\u003e ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:6:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"List Types Ordered List First item Second item Third item Unordered List List item Another item And another item Nested list Fruit Apple Orange Banana Dairy Milk Cheese ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:7:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["WordPress","Other"],"content":"Other Elements — abbr, sub, sup, kbd, mark GIF is a bitmap image format. 
H2O Xn + Yn: Zn Press CTRL+ALT+Delete to end the session. Most salamanders are nocturnal, and hunt for insects, worms, and other small creatures. The above quote is excerpted from Rob Pike’s talk during Gopherfest, November 18, 2015. ↩︎ ","date":"Wednesday, Mar 11, 2020","objectID":"/markdown-syntax-guide/:8:0","tags":["WordPress","markdown","syntax"],"title":"Markdown Syntax Guide","uri":"/markdown-syntax-guide/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"This script New-AzPipeline lets you programmatically create Azure Pipelines based on your folder structure. It browses through your folder structure for pipeline.yml files and creates corresponding Azure Pipelines in Azure DevOps. It has several features like creating pipelines based on a specific folder/module version, latest version or just creates all. It also compares against existing Pipelines and skips these. ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:0:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Script The script New-AzPipeline.ps1 can be found here. A newer version is in progress already and will be available soon. Check out our Scripts repository for any future updates. 
","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:1:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Parameters Mandatory OrganizationName (Azure DevOps Organization) ProjectName (Azure DevOps Project) RepositoryName (Azure DevOps Repository) Optional FolderPath (Azure Pipelines folder path) Version (Pipeline versions to be used) PipelinePath (Local folder path to be browsed) Latest (Latest Pipeline versions to be created) All (All Pipeline versions to be created) ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:2:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Usage .\\New-AzPipeline.ps1 -OrganizationName \u003cOrganizationName\u003e -ProjectName \u003cProjectName\u003e -RepositoryName \u003cRepositoryName\u003e -Latest ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:3:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Requirements Azure DevOps Login (az devops login) or Azure Login (az login / Connect-AzAccount) Azure DevOps Organization Azure DevOps Project Azure DevOps Repository ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:4:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create 
Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Permissions Contributor (Azure DevOps) ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:5:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Base Folder Structure root ┣ Module0 ┃ ┗ 2020-04-20 ┃ ┃ ┗ Pipeline ┃ ┃ ┣ pipeline.variables.yml ┃ ┃ ┗ pipeline.yml ┣ Module1 ┃ ┣ 2020-01-09 ┃ ┃ ┗ Pipeline ┃ ┃ ┣ pipeline.variables.yml ┃ ┃ ┗ pipeline.yml ┃ ┗ 2020-03-20 ┃ ┃ ┗ Pipeline ┃ ┃ ┣ pipeline.variables.yml ┃ ┃ ┗ pipeline.yml ┃ . ┃ . ┗ New-AzPipeline.ps1 Based on this structure, New-AzPipeline.ps1 recognizes Module0 and Module1 as Pipeline names 2020-04-20 as corresponding versions Azure Pipelines based on pipeline.yml ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:6:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Impressions . . . . ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:7:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["PowerShell","Azure","Scripts","DevOps"],"content":"Contributions Thanks to @simonbms for helping me out with the object comparisons in order to have only the latest folders picked for the -Latest switch. 
References Scripts Repository New-AzPipeline.ps1 ","date":"Saturday, Mar 7, 2020","objectID":"/programmatically-create-azure-pipelines/:8:0","tags":["Azure","pipeline","CICD","Automation","PowerShell","DevOps","ADO","scripts","CICD","iac"],"title":"Programmatically Create Azure Pipelines","uri":"/programmatically-create-azure-pipelines/"},{"categories":["Scripts"],"content":"You can do everything with PowerShell, even locking your workstation with one simple function call. Just call the LockWorkstation() function in user32.dll and that’s it! The nice thing is that it works in both local and remote sessions. Function Lock-WorkStation { rundll32.exe user32.dll, LockWorkStation } References Locking a Computer ","date":"Sunday, Aug 4, 2019","objectID":"/function-lock-workstation-locally-and-remotely/:0:0","tags":["lock workstation","PowerShell","posh","remote"],"title":"Function Lock-Workstation locally and remotely","uri":"/function-lock-workstation-locally-and-remotely/"},{"categories":["Azure","Scripts"],"content":"If you need a script that outputs the overall VM core count per region, here you go. This is a snippet from a RunBook which also iterates through each subscription first, so you get the number of used cores per subscription as well as per region. I took advantage of Get-AzVMUsage. 
","date":"Thursday, Feb 21, 2019","objectID":"/get-azure-vm-cores-per-region/:0:0","tags":["Automation","PowerShell","Azure","iaas","VM","vCPU","azureregion"],"title":"Get Azure VM Cores (vCPUs) per Region","uri":"/get-azure-vm-cores-per-region/"},{"categories":["Azure","Scripts"],"content":"Snippet $AzureLocations = Get-AzLocation | Select-Object DisplayName $Result = @() ForEach ($AzureLocation in $AzureLocations) { $CoreAmount = Get-AzVMUsage -Location $AzureLocation.DisplayName | Where-Object { $_.Name.Value -eq \"virtualMachines\" } | Select-Object CurrentValue $Object = New-Object -Type PSCustomObject -Property @{ Location = $AzureLocation.DisplayName VMCores = $CoreAmount.CurrentValue } $Object $Result += $Object } $Result ","date":"Thursday, Feb 21, 2019","objectID":"/get-azure-vm-cores-per-region/:0:1","tags":["Automation","PowerShell","Azure","iaas","VM","vCPU","azureregion"],"title":"Get Azure VM Cores (vCPUs) per Region","uri":"/get-azure-vm-cores-per-region/"},{"categories":["Azure","Scripts"],"content":"Output . ","date":"Thursday, Feb 21, 2019","objectID":"/get-azure-vm-cores-per-region/:0:2","tags":["Automation","PowerShell","Azure","iaas","VM","vCPU","azureregion"],"title":"Get Azure VM Cores (vCPUs) per Region","uri":"/get-azure-vm-cores-per-region/"},{"categories":["Azure","Windows","Windows Server","Scripts"],"content":"Okay, yeah, there are plenty of scripts out there which give you local accounts via WMI or ADSI, and yes, scripts also exist which give you all local groups, but only a few give you both. And what if you’re looking to implement this as a CustomScriptExtension on your Azure VM? Especially since the Custom Script Extension output is limited to only 4096 characters? Did you know that? This script was developed to minimize the output of local accounts and their group memberships and gives you a meaningful overview of the user accounts sitting on your VM. 
Check this out: Function Get-LocalAccountMemberships { \u003c# .SYNOPSIS Retrieves local user accounts and their group memberships. .DESCRIPTION Retrieves local user accounts and their group memberships. To have the output prepared for a Custom Script Extension in Azure, Export-Clixml is used, which can then be deserialized with Import-Clixml. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER GroupName A single string or an array of groups to be verified. .EXAMPLE PS Get-LocalAccountMemberships -GroupName Users,Administrators .NOTES Author: Sebastian Gräf Email: [email protected] Date: December 15, 2017 PSVer: 3.0/4.0/5.0 #\u003e param( [parameter(ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)] [array]$ComputerName = $Env:COMPUTERNAME, [array]$GroupName ) $results = @() $arr = @() $LocalAccounts = Get-WmiObject -ComputerName $ComputerName -Class Win32_UserAccount -Filter \"LocalAccount='$True'\" foreach ($LocalAccount in $LocalAccounts) { $obj = New-Object PSObject $obj | Add-Member NoteProperty \"LocalAccount\" $LocalAccount.Caption $obj | Add-Member NoteProperty \"Disabled\" $LocalAccount.Disabled foreach ($Group in $GroupName) { $wmi = Get-WmiObject -ComputerName $ComputerName -Query \"SELECT * FROM Win32_GroupUser WHERE GroupComponent=`\"Win32_Group.Domain='$ComputerName',Name='$Group'`\"\" foreach ($item in $wmi) { $data = $item.PartComponent -split \"\\,\" $domain = ($data[0] -split \"=\")[1] $name = ($data[1] -split \"=\")[1] $arr += (\"$domain\\$name\").Replace(\"\"\"\", \"\") } if ($arr -contains $LocalAccount.Caption) { $obj | Add-Member NoteProperty \"$Group\" \"true\" } else { $obj | Add-Member NoteProperty \"$Group\" \"false\" } } $results += $obj } $results } $output = Get-LocalAccountMemberships -GroupName Users, Administrators, 'Remote Desktop Users' | Export-Clixml output.xml gc output.xml A simple output of Get-LocalAccountMemberships looks 
like this ![1](featured-image-preview.png) So you export your output with the help of Export-Clixml, and showing the content of your XML file in the console gives you a readable XML structure again. ![2](2021-02-21-20-56-04.png) Once a script has been run on a VM, the common output of the Custom Script Extension looks like this: ![3](2021-02-21-20-56-27.png) You can grab this output of your CustomScriptExtension on your VM with the following: $output = Get-AzureRmVMDiagnosticsExtension -ResourceGroupName $ResourceGroupName -VMName $vmName -Name \"Get-LocalAccountMemberships\" -Status $output = $output.SubStatuses[0].Message $output -replace '\\\\n','' | Out-File output.xml -Force $output = Import-Clixml output.xml The trick here is to get the output message from your CustomScriptExtension with $output.SubStatuses[0].Message, remove every \\n, save it and import it as a readable XML structure. Once digested and imported with Import-Clixml you get the same output as before. . So why are we doing it this way? Consider this: you’re going to execute a Custom Script Extension on your VM without having remote access to it. Yes, you don’t have access to it, BUT you know you can use the Azure VMAgent, which is installed by default on every VM in Azure. By having a Custom Script Extension executed including any of your scripts, e.g. “Get-LocalAccountMemberships”, you can grab details from your machine wit","date":"Thursday, Feb 21, 2019","objectID":"/get-local-account-memberships/:0:0","tags":["Windows","PowerShell","posh","remote"],"title":"Get Local Account Memberships","uri":"/get-local-account-memberships/"},{"categories":["WordPress"],"content":"The main configuration of WordPress is handled by wp-config.php which is responsible for database access, language, API keys, security and more. Anything changed in this file directly influences your site’s settings and appearance. 
Settings in wp-config.php are considered global and override the corresponding parameters in your admin panel. General define('WP_HOME', 'https://www.graef.io'); // Main URL define('WP_SITEURL', 'https://www.graef.io'); // Site URL Deactivate Automatic Updates define( 'AUTOMATIC_UPDATER_DISABLED', true ); Disable Filter for Uploads define( 'ALLOW_UNFILTERED_UPLOADS', true ); Automatically Empty Recycle Bin define ('EMPTY_TRASH_DAYS', 7); define ('EMPTY_TRASH_DAYS', 0); Deactivate Editor for Themes and Plugins define( 'DISALLOW_FILE_EDIT', true ); Set Default Theme for WordPress define( 'WP_DEFAULT_THEME', 'default-theme-folder-name' ); ","date":"Thursday, Feb 21, 2019","objectID":"/wordpress-basic-settings-in-configphp/:0:0","tags":["WordPress","wp-config","PHP"],"title":"Wordpress Basic Settings in config.php","uri":"/wordpress-basic-settings-in-configphp/"},{"categories":["Azure"],"content":"Requirements ","date":"Thursday, Aug 9, 2018","objectID":"/get-aad-tenant-id-and-subscription-id/:1:0","tags":["default","PowerShell","posh","remote","Azure","AAD"],"title":"Get Azure Active Directory Tenant ID and Subscription ID","uri":"/get-aad-tenant-id-and-subscription-id/"},{"categories":["Azure"],"content":"Install the Azure PowerShell Install-Module -Name Az -AllowClobber -Scope CurrentUser ","date":"Thursday, Aug 9, 2018","objectID":"/get-aad-tenant-id-and-subscription-id/:1:1","tags":["default","PowerShell","posh","remote","Azure","AAD"],"title":"Get Azure Active Directory Tenant ID and Subscription ID","uri":"/get-aad-tenant-id-and-subscription-id/"},{"categories":["Azure"],"content":"Get Tenant and Subscription Details during Login To get your Tenant ID / Name and Subscription ID / Name you have several options with PowerShell. One option is Connect-AzAccount, which directly gives you your default Subscription Name as well as your default Tenant ID after logging in. 
","date":"Thursday, Aug 9, 2018","objectID":"/get-aad-tenant-id-and-subscription-id/:2:0","tags":["default","PowerShell","posh","remote","Azure","AAD"],"title":"Get Azure Active Directory Tenant ID and Subscription ID","uri":"/get-aad-tenant-id-and-subscription-id/"},{"categories":["Azure"],"content":"Get Tenant and Subscription details from the Context Another option is Get-AzContext | Select-Object *, which gets the metadata used to authenticate Azure Resource Manager requests. You get some more details with Get-AzTenant, which gets the tenants that are authorized for your current user. References Install Azure PowerShell Connect-AzAccount Get-AzContext Get-AzTenant ","date":"Thursday, Aug 9, 2018","objectID":"/get-aad-tenant-id-and-subscription-id/:3:0","tags":["default","PowerShell","posh","remote","Azure","AAD"],"title":"Get Azure Active Directory Tenant ID and Subscription ID","uri":"/get-aad-tenant-id-and-subscription-id/"},{"categories":["scripts"],"content":"As published in my Technet Gallery Script Center here, you can get current timezones remotely via PowerShell with Get-Timezones. Get-Timezones uses WMI to communicate with your servers. Function Get-Timezones { \u003c# .SYNOPSIS Retrieves timezones of local or remote computers via WMI. .DESCRIPTION Retrieves timezones of local or remote computers via WMI. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER Credentials Commit Credentials for a different domain. .PARAMETER Verbose Run in Verbose Mode. .EXAMPLE PS C:\\\u003e Get-Timezones -ComputerName (gc 'C:\\computers.txt') -Credentials (Get-Credential) ComputerName TimezoneName DaylightSaving TimezoneCaption ------------ ------------ -------------- --------------- SERVER01 W. Europe Standard Time yes (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna SERVER02 W. 
Europe Standard Time yes (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna .NOTES Author: Sebastian Gräf Website: https://graef.io Email: [email protected] Date: June 27, 2017 PSVer: 3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] $ComputerName = $Env:COMPUTERNAME, [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] [ValidateNotNull()] [System.Management.Automation.PSCredential][System.Management.Automation.Credential()] $Credentials = [System.Management.Automation.PSCredential]::Empty ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $Results = @() $ProgressCounter = 0 } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) if (Test-Connection $Computer -Count 1 -Quiet) { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Processing $Computer\" try { $win32_timezone = Get-WmiObject -Class win32_timezone -ComputerName $Computer -ErrorAction Stop -Credential $Credentials if ($win32_timezone.DaylightBias -eq 0) { $daylightsaving = \"no\" } else { $daylightsaving = \"yes\" } $obj = New-Object -Type PSCustomObject -Property @{ ComputerName = $Computer TimezoneCaption = $win32_timezone.Caption TimezoneName = $win32_timezone.StandardName DaylightSaving = $daylightsaving } $Results += $obj } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } else { Write-Verbose \" Host [$Computer] Failed Connectivity Test\" } } $Results | select ComputerName, TimezoneName, DaylightSaving, TimezoneCaption } End { Write-Progress -activity \"Running on $Computer\" -Status \"Completed.\" -Completed Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } This will give you the following output: . With Set-Timezones you can set timezones remotely. 
If you need to disable automatic daylight saving time you can add the additional parameter DSTOff. Function Set-Timezones { \u003c# .SYNOPSIS Sets and retrieves timezones on local or remote computers via WinRM and enables or disables daylight saving. .DESCRIPTION This PowerShell function sets and retrieves the timezone of a local or remote computer using WinRM (Invoke-Command). It also enables or disables daylight saving time. The function accepts pipeline input and outputs to the pipeline, and takes credentials if you want to run it against a specific domain. If a remote computer is not accessible, the function will catch the error and continue. You can use tzutil /l to get a list of available time zone IDs. Timezone TimeZoneID --------- ---------- (UTC-06:00) Central Time (US \u0026 Canada) Central Standard Time (UTC) Coordinated Universal Time UTC (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna W. Europe Standard Time .PARAMETER TimezoneID The ID of the timezone to be set. .PARAMETER DSTOff Enables or disables ","date":"Thursday, Jun 28, 2018","objectID":"/get-and-set-timezones-via-powershell-remotely/:0:0","tags":["timezone","PowerShell","posh","remote"],"title":"Get and Set Timezones via PowerShell remotely","uri":"/get-and-set-timezones-via-powershell-remotely/"},{"categories":["Windows Server"],"content":"The purpose of this article is to show how to adjust the Windows Failover Cluster “Response to resource failure” policy. If a Cluster Core Resource like the File Share Witness or Disk Quorum is in a failing state and offline, the cluster runs into jeopardy and will fail once the active node gets rebooted, as no vote can be cast for the quorum. To avoid this, you should decrease the period in which your cluster core resource attempts to restart. The lower the value, the more attempts your resource gets to restart itself. 
To increase restart attempts for the Cluster Core Resource you need to adjust the “Response to resource failure” policy from one hour to 15 minutes. . So with a period for restarts of 15 minutes, a maximum of 1 restart in the specified period, and “If all the restart attempts fail, begin restarting again after the specified period” set to 15 minutes, the resource will try restarting itself every 15 minutes instead of every hour until it’s brought back up online. ","date":"Monday, Jan 1, 2018","objectID":"/how-to-adjust-windows-failover-cluster-response-to-resource-failure-policy/:0:0","tags":["iaas","wfc","Windows","failover","cluster","failovercluster","quorum","policy","filesharewitness","jeopardy"],"title":"How to adjust Windows Failover Cluster 'Response to Resource Failure' Policy","uri":"/how-to-adjust-windows-failover-cluster-response-to-resource-failure-policy/"},{"categories":["Other"],"content":"If you’re running a webserver, you may have to run different services on the same port and need multiple IP addresses on the same network interface. This can happen for SMTP or Exchange Servers using several connectors. In Windows Server 2008 (don’t ask me why Microsoft did that) as well as in Windows Server 2012, the source IP address on a network interface will always be the lowest IP address, in this case the latest one you have added. So what you have to do now is set a flag on the particular IP address to say: “Hey you, please don’t be the primary source address.” This happens with the help of the netsh.exe command. You can view all of your IP addresses and their status with the following command: netsh int ipv4 show ipaddresses level=verbose The IP address which should not act as a primary source address needs to be flagged with SkipAsSource=true. 
As this command only works while adding a new address, to enable this flag on one of your existing IP addresses you first have to remove the address and then add it again with this command: netsh int ipv4 add address \"Local Area Connection\" 10.0.0.4/24 SkipAsSource=true First, the SkipAsSource flag instructs Windows not to use this IP as the primary source IP, and secondly, it prevents its registration in DNS (if dynamic registration is enabled). ","date":"Sunday, Nov 26, 2017","objectID":"/set-primary-ip-address-order-with-multiple-ip-addresses-on-a-nic/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Set primary IP address order with multiple IP addresses on a NIC","uri":"/set-primary-ip-address-order-with-multiple-ip-addresses-on-a-nic/"},{"categories":["Azure"],"content":"In case you get the error below, it is mostly due to outdated modules, in this case the AzureRM module. Your Azure credentials have not been set up or have expired, please run Login-AzureRMAccount to set up your Azure credentials. . Go ahead and update your Azure PowerShell modules to the latest version, 4.4.0, and it should be gone. You can do this with Update-Module or just use Install-Module AzureRm -Force ","date":"Thursday, Oct 26, 2017","objectID":"/expired-azure-credentials/:0:0","tags":["Automation","credentials","az","Azure","PowerShell","azurerm"],"title":"Expired Azure credentials","uri":"/expired-azure-credentials/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"Every system administrator comes into a situation where they want to see who and how many users were logged on to their servers, either via Remote Desktop or via script. This little function evaluates the System log with the help of Get-EventLog and delivers you the latest logon and logoff events for every user. 
I’ve left the Invoke-Command commented out in case you want to use PowerShell Remoting (WinRM). Function Get-LogonHistory { \u003c# .SYNOPSIS Retrieves history of last logged on users with usernames and respective logoff/logon times. .DESCRIPTION Retrieves history of last logged on users with usernames and respective logoff/logon times. .PARAMETER Newest This command gets the most recent entries from the event log according to its value. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER Credentials Commit Credentials for a different domain. .PARAMETER Verbose Run in Verbose Mode. .EXAMPLE PS C:\\\u003e Get-LogonHistory -ComputerName SERVER1 -Credentials (Get-Credential) -Newest .NOTES Author: Sebastian Gräf Email: [email protected] Date: April 15, 2017 PSVer: 2.0/3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] [string[]]$ComputerName = $Env:COMPUTERNAME, [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] [int]$Newest = 10, [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] [ValidateNotNull()] [System.Management.Automation.PSCredential][System.Management.Automation.Credential()] $Credentials = [System.Management.Automation.PSCredential]::Empty ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $Results = @() $ProgressCounter = 0 } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) if (Test-Connection $Computer -Count 1 -Quiet) { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Processing $Computer\" try { $ELogs = Get-EventLog System -Source Microsoft-Windows-WinLogon -ComputerName $Computer -Newest $Newest #$ELogs = Invoke-Command { param ($Newest) Get-EventLog 
System -Source Microsoft-Windows-WinLogon -Newest $Newest } -ArgumentList $Newest -ComputerName $Computer ForEach ($Log in $ELogs) { If ($Log.InstanceId -eq 7001) { $EventType = \"Logon\" } ElseIf ($Log.InstanceId -eq 7002) { $EventType = \"Logoff\" } Else { Continue } $Results += New-Object PSObject -Property @{ User = (New-Object System.Security.Principal.SecurityIdentifier $Log.ReplacementStrings[1]).Translate([System.Security.Principal.NTAccount]) Time = $Log.TimeWritten 'Event Type' = $EventType } } } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } else { Write-Verbose \" Host [$Computer] Failed Connectivity Test\" } } $Results | Select User, Time, \"Event Type\" | Sort Time -Descending } End { Write-Progress -activity \"Running on $Computer\" -Status \"Completed.\" -Completed Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } Please keep in mind that it evaluates every event; this means even if a user was only performing actions remotely via a PowerShell script rather than logging on interactively, it will be displayed as well. If you want to distinguish script logons you can easily have a look at the logon and logoff times. If a user account was only logged on for a few seconds … then this is an indicator of a remote script logon. The script will give you the following output: . With the following command you can also run this function against more than just one server: 
Get-LogonHistory -ComputerName (gc C:\\computers.txt) -Newest 30 ","date":"Thursday, Oct 5, 2017","objectID":"/get-logonhistory-who-was-logged-on-to-my-server/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Get-LogonHistory: Who was logged on to my server?","uri":"/get-logonhistory-who-was-logged-on-to-my-server/"},{"categories":["Scripts","PowerShell","Windows Server","Windows"],"content":"Okay, at the end it’s a simple $PSVersionTable.PSVersion wrapped in an Invoke-Command, but hey, these simple things are handy in case you need to run it against 100s of servers and not just locally. With the help of Invoke-Command via WinRM and $PSVersionTable.psversion wrapped in a foreach, you can retrieve the PowerShell version of your remote computers, regardless of whether you need to use credentials to run it against a different domain than the one you currently reside in. Just use the below function Get-PSVersions, simple but good. Function Get-PSVersions { \u003c# .SYNOPSIS Gets the PowerShell version on a local or remote computer using Invoke-Command. .DESCRIPTION Gets the PowerShell version on a local or remote computer using Invoke-Command. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER Credentials Commit a PSCredential object or use Get-Credential. .PARAMETER Verbose Run in Verbose Mode. 
.EXAMPLE PS C:\u003e Get-PSVersions -ComputerName Server01,Server02 Major Minor Build Revision PSComputerName ----- ----- ----- -------- -------------- 5 1 14393 1066 Server01 5 1 14393 1066 Server02 .EXAMPLE PS C:\u003e Get-PSVersions -ComputerName Server01,Server02 -Credentials (Get-Credential) .EXAMPLE PS C:\u003e Get-PSVersions -ComputerName (Get-Content C:\\ServerList.txt) .LINK https://graef.io .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 9, 2017 PSVer: 3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter( Mandatory = $false, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)] [string[]]$ComputerName = $Env:COMPUTERNAME, [Parameter( ValueFromPipelineByPropertyName = $true, Mandatory = $false, ValueFromPipeline = $true)] [Alias( 'PSCredential' )] [ValidateNotNull()] [System.Management.Automation.PSCredential] [System.Management.Automation.Credential()] $Credentials = [System.Management.Automation.PSCredential]::Empty ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $ProgressCounter = 0 } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) if (Test-Connection $Computer -Count 1 -Quiet) { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Processing $Computer\" try { $PSVersion = Invoke-Command -Computername $Computer 
-Scriptblock { $PSVersionTable.psversion } -Credential $Credentials $PSVersion } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } else { Write-Verbose \" Host [$Computer] Failed Connectivity Test \" } } } End { Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } While processing your list of computers, a nice Write-Progress will give you some details about the status: . Once finished, your output will look like this: . ","date":"Friday, Sep 22, 2017","objectID":"/get-psversions-retrieve-powershell-version-remotely/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Get-PSVersions: Retrieve Powershell version remotely","uri":"/get-psversions-retrieve-powershell-version-remotely/"},{"categories":["Windows Server","Windows"],"content":"Based on Microsoft’s TechNet article regarding Resolve-DnsName, I have created a function with the ability to run it against more than one computer. Resolve-DnsNames performs a DNS name query resolution for the specified name. Function Resolve-DnsNames { \u003c# .SYNOPSIS Resolves IP or DNS name for one or more computers. .DESCRIPTION Resolves IP or DNS name for one or more computers. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER IPAddress Pass an IP address to resolve. .PARAMETER Verbose Run in Verbose Mode.
.EXAMPLE PS C:\u003e Resolve-DnsNames graef.io Name Type TTL Section IPAddress ---- ---- --- ------- --------- graef.io AAAA 86336 Answer 2a01:488:42:1000:50ed:84e8:ff91:1f91 graef.io A 86336 Answer 80.237.132.232 .EXAMPLE PS C:\u003e Resolve-DnsNames 80.237.132.232 Name Type TTL Section NameHost ---- ---- --- ------- -------- 232.132.237.80.in-addr.arpa PTR 40292 Answer graef.io .EXAMPLE PS C:\u003e Resolve-DnsNames -ComputerName (Get-Content C:ServerList.txt) .LINK Home .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 10, 2017 PSVer: 2.0/3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter( Mandatory = $false, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)] [string[]]$ComputerName = $Env:COMPUTERNAME ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $ProgressCounter = 0 $array=@() } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) Write-Verbose \" [$($MyInvocation.InvocationName)] :: Processing $Computer\" try { $Resolution = Resolve-DnsName $Computer -ErrorAction SilentlyContinue $obj = New-Object PSObject $obj | Add-Member NoteProperty ComputerName ($Computer) $obj | Add-Member NoteProperty Name ($Resolution.Name) $obj | Add-Member NoteProperty Type ($Resolution.Type) $obj | Add-Member NoteProperty TTL ($Resolution.TTL) $obj | Add-Member NoteProperty Section ($Resolution.Section) $obj | Add-Member NoteProperty NameHost ($Resolution.NameHost) $obj | Add-Member NoteProperty IPAddress ($Resolution.IPAddress) $array += $obj } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } $array | ft } End { Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } References Resolve-DnsName ","date":"Sunday, Sep 10, 
2017","objectID":"/resolve-dnsnames-resolve-dns-or-ip-for-multiple-computers/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","DNS","IP","dnsforward","dnsreverse"],"title":"Resolve-DnsNames: Resolve DNS or IP for multiple Computers","uri":"/resolve-dnsnames-resolve-dns-or-ip-for-multiple-computers/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"Get DELL Tag and Express Service Code remotely for more than one computer. You can also pass different credentials for a specific domain. The below function Get-DellTags retrieves serials remotely from your machines. With the help of the function ConvertSerial the serial gets converted into a DELL Express Service Code on the fly. Function Get-DellTags { \u003c# .SYNOPSIS Retrieves the serial on a local or remote computer using Invoke-Command and converts it into a DELL Express Service Code. .DESCRIPTION Retrieves the serial on a local or remote computer using Invoke-Command and converts it into a DELL Express Service Code. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER Credentials Pass a PSCredential object or use Get-Credential. .PARAMETER Verbose Run in Verbose Mode.
.EXAMPLE PS C:\u003e Get-DellTags -ComputerName Server01 ComputerName DellTag ExpressServiceCode URL ------------ ------- ------------------ --- Server01 FV9SAX4 34542623560 https://www.dell.com/support/home/us/en/19/product-support/servicetag/FV9SAX4/Research .EXAMPLE PS C:\u003e Get-DellTags -ComputerName Server01,Server02 -Credentials (Get-Credential) .EXAMPLE PS C:\u003e Get-DellTags -ComputerName (Get-Content C:\\ServerList.txt) .LINK Home .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 9, 2017 PSVer: 2.0/3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter( Mandatory = $false, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)] [string[]]$ComputerName = $Env:COMPUTERNAME, [Parameter( ValueFromPipelineByPropertyName = $true, Mandatory = $false, ValueFromPipeline = $true)] [Alias( 'PSCredential' )] [ValidateNotNull()] [System.Management.Automation.PSCredential] [System.Management.Automation.Credential()] $Credentials = [System.Management.Automation.PSCredential]::Empty ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $ProgressCounter = 0 } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) if (Test-Connection $Computer) { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Processing $Computer\" try { ## If you want to use WMI #$DellServiceTag = Get-WmiObject Win32_Bios -Credential $Credentials -ComputerName $Computer ## If you want to use CIM #$DellServiceTag = Get-CimInstance Win32_Bios -Credential $Credentials -ComputerName $Computer $DellServiceTag = Invoke-Command -Computername $Computer -Scriptblock { Get-CimInstance Win32_Bios } -Credential $Credentials ConvertSerial $DellServiceTag.serialnumber $DellTags = @() $WMIObject = New-Object PSObject $WMIObject | add-member Noteproperty ComputerName $Computer -Force
$WMIObject | add-member Noteproperty DellTag $DellServiceTag.serialnumber -Force $WMIObject | add-member Noteproperty ExpressServiceCode $Serial -Force $WMIObject | add-member Noteproperty URL \"https://www.dell.com/support/home/us/en/19/product-support/servicetag/$($DellServiceTag.serialnumber)/Research\" -Force $DellTags += $WMIObject $DellTags } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } else { Write-Verbose \" Host [$Computer] Failed Connectivity Test \" } } } End { Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } Function ConvertSerial { \u003c# .SYNOPSIS Converts DELL Tag Serial into a DELL Express Service Code. .DESCRIPTION Converts DELL Tag Serial into a DELL Express Service Code. .PARAMETER Serial Commit serial string. .LINK Home .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 9, 2017 PSVer: 2.0/3.0/4.0/5.0 #\u003e param ($Serial) $36Base = \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ\" $SerialLength = $Serial.length $Script:DellServiceCode = 0 for ($x = 0; $x -lt $SerialLength; $x++) { for ($y = 0; $y -lt 36; $y++) { if ($Serial.substring($x, 1) -eq $36Base.substring($y, 1)) { $answer = (($y) * ([math]::pow(36, $SerialLength - $x - 1))) $Sc","date":"Tuesday, Aug 15, 2017","objectID":"/get-delltags-get-dell-tags-and-express-service-code-remotely/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Get-DellTags: Get Dell Tags and Express Service Code remotely","uri":"/get-delltags-get-dell-tags-and-express-service-code-remotely/"},{"categories":["Windows","Windows Server"],"content":"Hi there, following function Get-ScsiDisks retrieves disk details for VMWare Guests or any computer with corresponding SCSI disk details like SCSI ID and SCSI Bus. The function concatenates objects consisting of Win32_DiskDrive, Win32_LogicalDisk and Win32_DiskDriveToDiskPartition using WMI. For WinRM you can use Invoke-Command and inject the script. 
Function Get-ScsiDisks { \u003c# .SYNOPSIS Retrieves disk details for VMWare Guests with corresponding SCSI disk details like SCSI ID and SCSI Bus. .DESCRIPTION Retrieves a concatenated object consisting of Win32_DiskDrive, Win32_LogicalDisk and Win32_DiskDriveToDiskPartition using WMI. For WinRM you can use Invoke-Command and inject the script. .PARAMETER ComputerName A single Computer or an array of computer names. The default is localhost ($env:COMPUTERNAME). .PARAMETER Credentials Commit Credentials for a different domain. .PARAMETER Verbose Run in Verbose Mode. .EXAMPLE PS C:\u003e Get-ScsiDisks ComputerName Disk DriveLetter VolumeName Size FreeSpace DiskModel ------------ ---- ----------- ---------- ---- --------- --------- SERVER \\.PHYSICALDRIVE1 D: Data 767 767 VMware Virtual di... SERVER \\.PHYSICALDRIVE0 C: OS 59 39 VMware Virtual di... .EXAMPLE PS C:\u003e Get-ScsiDisks | Out-GridView .EXAMPLE PS C:\u003e Get-ScsiDisks | ft -a .EXAMPLE PS C:\u003e Get-ScsiDisks -ComputerName (gc 'C:VMs.txt') -Credentials Get-Credential .LINK Home .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 12, 2017 PSVer: 3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] $ComputerName = $Env:COMPUTERNAME, [Parameter(ValueFromPipelineByPropertyName = $true, ValueFromPipeline = $true)] [ValidateNotNull()] [System.Management.Automation.PSCredential][System.Management.Automation.Credential()] $Credentials = [System.Management.Automation.PSCredential]::Empty ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" $result=@() $ProgressCounter = 0 } Process { foreach ($Computer in $ComputerName) { $ProgressCounter++ Write-Progress -activity \"Running on $Computer\" -status \"Please wait ...\" -PercentComplete (($ProgressCounter / $ComputerName.length) * 100) if (Test-Connection $Computer -Count 1 -Quiet) { Write-Verbose \" [$($MyInvocation.InvocationName)] :: 
Processing $Computer\" try { Get-WmiObject -Class Win32_DiskDrive -ComputerName $Computer -Credential $Credentials | % { $disk = $_ $partitions = \"ASSOCIATORS OF \" + \"{Win32_DiskDrive.DeviceID='$($disk.DeviceID)'} \" + \"WHERE AssocClass = Win32_DiskDriveToDiskPartition\" Get-WmiObject -Query $partitions -ComputerName $Computer -Credential $Credentials | % { $partition = $_ $drives = \"ASSOCIATORS OF \" + \"{Win32_DiskPartition.DeviceID='$($partition.DeviceID)'} \" + \"WHERE AssocClass = Win32_LogicalDiskToPartition\" Get-WmiObject -Query $drives -ComputerName $Computer -Credential $Credentials | % { $obj = New-Object -Type PSCustomObject -Property @{ ComputerName = $Computer Disk = $disk.DeviceID DiskSize = [math]::Truncate($disk.Size / 1GB); DiskModel = $disk.Model Partition = $partition.Name DriveLetter = $_.DeviceID VolumeName = $_.VolumeName Size = [math]::Truncate($_.Size / 1GB) FreeSpace = [math]::Truncate($_.FreeSpace / 1GB) SCSIBus = $disk.SCSIBus SCSITargetId = $disk.SCSITargetId } $result += $obj } } } } catch { Write-Verbose \" Host [$Computer] Failed with Error: $($Error[0])\" } } else { Write-Verbose \" Host [$Computer] Failed Connectivity Test\" } } $result | select ComputerName,Disk,DriveLetter,VolumeName,Size,FreeSpace,DiskModel,Partition,SCSIBus,SCSITargetId } End { Write-Progress -activity \"Running on $Computer\" -Status \"Completed.\" -Completed Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } ","date":"Thursday, Aug 10, 2017","objectID":"/get-scsidisks-combine-physicaldisk-and-logicaldisk-objects/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","SCSI"],"title":"Get-ScsiDisks: Combine Physicaldisk and Logicaldisk Objects","uri":"/get-scsidisks-combine-physicaldisk-and-logicaldisk-objects/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"If you want to avoid or block coming up your screensaver locally or remotely, take this nice script. 
It will move your mouse cursor and press space every minute to simulate user activity. This all happens with the help of Wscript.Shell for sending a key press and System.Windows.Forms.Cursor for moving the cursor by 1 pixel. Function Simulate-Activity { \u003c# .SYNOPSIS Simulates Mouse and Keyboard Activity to avoid Screensaver coming up. .DESCRIPTION Simulates Mouse and Keyboard Activity to avoid Screensaver coming up. .PARAMETER Minutes Pass the number of minutes to simulate activity. If no value is given you will be asked. .PARAMETER Verbose Run in Verbose Mode. .EXAMPLE PS C:\u003e Simulate-Activity -Minutes 60 .LINK Home .NOTES Author: Sebastian Gräf Email: [email protected] Date: September 9, 2017 PSVer: 3.0/4.0/5.0 #\u003e [Cmdletbinding()] Param ( [Parameter(Mandatory = $false)] [string]$Minutes ) Begin { Write-Verbose \" [$($MyInvocation.InvocationName)] :: Start Process\" } Process { Add-Type -AssemblyName System.Windows.Forms $shell = New-Object -com \"Wscript.Shell\" $pshost = Get-Host $pswindow = $pshost.ui.rawui $pswindow.windowtitle = 'Activity-Simulator' if(!$minutes) { $Minutes = Read-Host -Prompt \"Enter minutes for simulating activity\" } for ($i = 0; $i -lt $Minutes; $i++) { cls $timeleft = $Minutes - $i Write-Host (Get-Date -Format HH:mm:ss) -ForegroundColor Green Write-Host 'Time left: ' -NoNewline Write-Host \"$timeleft\" -ForegroundColor Red -NoNewline Write-Host ' Minutes' $shell.sendkeys(' ') for ($j = 0; $j -lt 6; $j++) { for ($k = 0; $k -lt 10; $k++) { Write-Progress -Activity 'Simulating activity ..' -PercentComplete ($k * 10) -Status \"Please ...
don't disturb me.\" Start-Sleep -Seconds 1 } } $Pos = [System.Windows.Forms.Cursor]::Position $x = ($pos.X % 500) + 1 $y = ($pos.Y % 500) + 1 [System.Windows.Forms.Cursor]::Position = New-Object System.Drawing.Point($x, $y) } } End { Write-Verbose \" [$($MyInvocation.InvocationName)] :: End Process\" } } If you don’t pass a value for the Minutes parameter, it will ask you how many minutes you want to simulate activity: . After that, the activity runs for the given amount of time: . ","date":"Thursday, Aug 10, 2017","objectID":"/simulate-activity-simulate-user-mouse-and-keyboard-input-with-powershell/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Simulate-Activity: Simulate user mouse and keyboard input with PowerShell","uri":"/simulate-activity-simulate-user-mouse-and-keyboard-input-with-powershell/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"In this case we’re going to use the method GetHostAddresses of the Dns class of the System.Net namespace.
For PowerShell 2.0 you can use the following Windows PowerShell one-liners: ","date":"Monday, Jul 10, 2017","objectID":"/resolve-dns-and-ip-addresses-with-powershell/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","DNS","IP"],"title":"Resolve DNS and IP addresses with PowerShell","uri":"/resolve-dns-and-ip-addresses-with-powershell/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"Name to IP Address (DNS Forward) [System.Net.Dns]::GetHostAddresses('graef.io') [System.Net.Dns]::GetHostAddresses('graef.io').IPAddressToString ","date":"Monday, Jul 10, 2017","objectID":"/resolve-dns-and-ip-addresses-with-powershell/:0:1","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","DNS","IP"],"title":"Resolve DNS and IP addresses with PowerShell","uri":"/resolve-dns-and-ip-addresses-with-powershell/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"IP Address to Name (DNS Reverse) [System.Net.Dns]::GetHostbyAddress('85.13.135.42') HostName Aliases AddressList -------- ------- ----------- graef.io {} {85.13.135.42} As of PowerShell 4.0 you can also use the Cmdlet Resolve-DnsName for both Forward and Reverse: Resolve-DnsName graef.io Name Type TTL Section IPAddress ---- ---- --- ------- --------- graef.io AAAA 72711 Answer 2a01:488:42:1000:50ed:84e8:ff91:1f91 graef.io A 72711 Answer 80.237.132.232 Resolve-DnsName 80.237.132.232 Name Type TTL Section NameHost ---- ---- --- ------- -------- 232.132.237.80.in-addr.arpa PTR 32738 Answer graef.io References System.Net.Dns ","date":"Monday, Jul 10, 2017","objectID":"/resolve-dns-and-ip-addresses-with-powershell/:0:2","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","DNS","IP"],"title":"Resolve DNS and IP addresses with PowerShell","uri":"/resolve-dns-and-ip-addresses-with-powershell/"},{"categories":["Windows Server"],"content":"Open PowerShell or Command Shell, enter the following: set devmgr_show_nonpresent_devices=1 devmgmt.msc . 
This will open your device manager with the ability to show hidden devices. Click on View and select “Show hidden devices”. . Select the hidden device you need to remove, right-click, and select Uninstall. That’s it! . In case you need to remove multiple orphan / hidden / ghosted devices like disks on Windows Failover Cluster Nodes, you can use Ghostbuster for that. See my other article about Ghostbuster as well. ","date":"Tuesday, Jun 27, 2017","objectID":"/show-non-present-devices-in-windows-with-device-manager/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Show non-present devices in Windows with Device Manager","uri":"/show-non-present-devices-in-windows-with-device-manager/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"While using SQL AlwaysOn with a third node in a different location as a DR solution, the third node is needed only in the case of a real disaster recovery. For this purpose, in most cases a manual intervention/failover is needed, and you need to adjust your DR Cluster Node Weight for it. You can use the following PowerShell snippet: Import-Module FailoverClusters $node = 'AlwaysOnNode1' (Get-ClusterNode $node).NodeWeight = 0 $cluster = (Get-ClusterNode $node).Cluster $nodes = Get-ClusterNode -Cluster $cluster $nodes | Format-Table -property NodeName, State, NodeWeight References Configure Cluster Quorum NodeWeight Settings ","date":"Friday, Jun 23, 2017","objectID":"/configure-cluster-quorum-node-weight-settings/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Configure Cluster Quorum node weight settings","uri":"/configure-cluster-quorum-node-weight-settings/"},{"categories":["WordPress"],"content":"I’m not quite sure why WordPress does this but it seems to happen after some major upgrades or if one of your themes is not aligned correctly. The editor is jumping up and down while writing posts and this drives me insane! 
It seems not to happen every time, but mostly after a WordPress update or after changing something on the interface. Nevertheless, if you want to have it disabled easily, go to Screen Options at the upper right corner of your screen and toggle it. (Screen options) Under Additional Settings you can see a field called Enable full-height editor and distraction-free functionality. (Additional settings) Untick it and be happy! ","date":"Friday, May 19, 2017","objectID":"/wordpress-editor-jump-fix/:0:0","tags":["WordPress","editor","fix"],"title":"Wordpress Editor Jump Fix","uri":"/wordpress-editor-jump-fix/"},{"categories":["PowerShell"],"content":"Have you ever wondered if there’s an opportunity to easily create a GUI out of every PowerShell Cmdlet? In many cases this can be very useful, for example if your Cmdlet has too many parameters to list, or just to see what it offers for common parameters as well. Just use the Show-Command cmdlet with any PowerShell cmdlet to bring up a GUI interface. Let’s try this with the Get-Service Cmdlet and see what’s happening! Show-Command Get-Service . You will have three options for executing your command: Run, Copy (for the clipboard), or Cancel. References Show-Command ","date":"Saturday, Apr 15, 2017","objectID":"/show-command-get-a-gui-interface-for-any-powershell-cmdlet/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"Show-Command: Get a GUI-Interface for any Powershell Cmdlet","uri":"/show-command-get-a-gui-interface-for-any-powershell-cmdlet/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"I just wanted to share a tiny snippet with you if you’re looking for a simple PowerShell one-liner to get a KB Hotfix installed. It also verifies if the KB is installed already. 
$SourceFolder = \"C:\\Software\" $KB = \"KB2999226\" if (-not (Get-Hotfix -Id $KB -ErrorAction SilentlyContinue)) { Start-Process -FilePath \"wusa.exe\" -ArgumentList \"$SourceFolder\\Windows8.1-KB2999226-x64.msu /quiet /norestart\" -Wait } else { Write-Host \"$KB already installed.\" } Okay, this is a small one, but trust me, I will wrap it into a bigger function for you if you want to use it with more than one server or even with credentials. ","date":"Sunday, Apr 2, 2017","objectID":"/how-to-install-kb-hotfixes-only-if-they-are-not-installed/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"How to Install KB Hotfixes only if they are not installed","uri":"/how-to-install-kb-hotfixes-only-if-they-are-not-installed/"},{"categories":["Scripts","PowerShell","Windows Server"],"content":"If you’re searching for Windows Server Licensing and Activation Details of your Windows machine, you can use the following statement slmgr.vbs -dlv which will give you the following output . If you’re searching for some other details like your client machine ID (CMID), you can use the following statement slmgr.vbs -dli which will give you the following output . Both commands work for KMS and Non-KMS clients. 
It’s always good to have an opportunity to retrieve details like License Status Volume Activation Expiration Client Machine ID (CMID) KMS machine IP address KMS machine extended PID Activation interval Renewal interval Activation ID Application ID Extended PID Product Key Channel Volume Installation ID Volume activation expiration Remaining Windows rearm count Remaining SKU rearm count ","date":"Friday, Mar 10, 2017","objectID":"/kms-how-to-find-windows-server-licensing-details/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows"],"title":"KMS: How to Find Windows Server Licensing Details","uri":"/kms-how-to-find-windows-server-licensing-details/"},{"categories":["Windows"],"content":"As described in one of my previous articles How to show nonpresent devices in Windows with Device Manager you were able to show nonpresent devices in device manager and delete them one by one. GhostBuster, a free portable tool, lets you remove multiple or all old, non-present, unused and previous hardware devices from your Windows computer. Non-present devices are those devices that were once installed, but are now no longer attached to the computer. When you use the built-in Windows Device Manager, you can delete the devices one-by-one but not all at once. This is what’s possible with this Device Cleanup Tool. You can select one, multiple or all non-present devices and delete them together. This can be helpful in cases where you have to clean up ghosted disks from Failover Cluster nodes to avoid disk reservation issues. Just give it a try, you can download it here. 
GhostBuster References GhostBuster ","date":"Sunday, Feb 12, 2017","objectID":"/ghostbuster-remove-all-non-present-hidden-devices-from-windows/:0:0","tags":["Automation","scripts","PowerShell","posh","pwsh","Windows","ghostbuster","device manager"],"title":"GhostBuster: Remove all Non-present and hidden devices in Windows","uri":"/ghostbuster-remove-all-non-present-hidden-devices-from-windows/"},{"categories":null,"content":"make sure to run hugo in the root directory of the project where your /content folder and config.toml file are located hugo new “posts/understanding-and-improving-your-cybersecurity-posture-the-importance-of-strong-passwords-2FA-and-awareness-of-phishing-scams/index.md” start hugo server including drafts (-D) hugo serve -D ","date":"Monday, Jan 1, 0001","objectID":"/cheatsheet/:0:0","tags":null,"title":"","uri":"/cheatsheet/"}]