I’m writing to you live from Urban Foxes in Belhaven Heights.
This is my first Featured Project post. Once a week I want to highlight an interesting software project I find. This week’s pick has been hilarious to read up on over coffee this morning: The Fuck.
From the GitHub repo:
The Fuck is a magnificent app, inspired by a @liamosaur tweet, that corrects errors in previous console commands.
The most eye-catching and hilarious part of the project’s README is the example gif, clearly demonstrating what I find so funny about this project.
I installed The Fuck using Homebrew on macOS.
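If you want to try it yourself, the install is quick; per the project README, you install the package and then add an alias to your shell config:

brew install thefuck
eval $(thefuck --alias)    # add this line to ~/.bashrc or ~/.zshrc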
I don’t have enough outlets for sharing memes, and an especially spicy meme like this one (sent to me by Matthew Lewis) needs a special memorialization.
I am playing around with a new build pipeline. I want to be able to create Spring Boot applications and build Docker images that can run on a Raspberry Pi. Because the Pi uses an ARM processor, the image build step is more involved. In this post, I will outline how I built an ARM-specific image from my Spring Boot demo codebase.
First, we use Maven to package the Java code into a JAR; the spring-boot:build-image goal forks the package lifecycle, as the output below shows. This can be accomplished from the command line:
mvnw spring-boot:build-image
Successful build console output:
[INFO] Scanning for projects...
[INFO]
[INFO] --------------------------< dev.michael:demo >--------------------------
[INFO] Building demo 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] >>> spring-boot-maven-plugin:2.6.3:build-image (default-cli) > package @ demo >>>
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:resources (default-resources) @ demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 1 resource
[INFO] Copying 0 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 5 source files to C:\Workspace\demo\demo\target\classes
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:testResources (default-testResources) @ demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] skip non existing resourceDirectory C:\Workspace\demo\demo\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to C:\Workspace\demo\demo\target\test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ demo ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running dev.michael.demo.DemoApplicationTests
... (tests omitted)
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.863 s - in dev.michael.demo.DemoApplicationTests
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:23 min
[INFO] Finished at: 2022-03-11T22:08:02-06:00
[INFO] ------------------------------------------------------------------------
Docker users may be familiar with the docker build command. buildx is “build extended”, an experimental CLI command that enables the creation of platform-specific Docker images. Here’s the command I used to generate one for the 32-bit OS running on my Pis.
docker buildx build --push --platform=linux/arm/v7 --tag=michaellambgelo/demo:latest .
The output shows Docker using BuildKit. The Dockerfile tells Docker how to package and start an application container. With openjdk:8-jdk-alpine as the base image, the Spring Boot application is built into an image and pushed to the Docker registry. From the registry, the image can be pulled onto whatever Pi/Docker configuration I want.
[+] Building 38.7s (11/11) FINISHED
=> [internal] booting buildkit 1.5s
=> => starting container buildx_buildkit_magical_thompson0 1.5s
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 141B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:8-jdk-alpine 3.1s
=> [auth] library/openjdk:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.8s
=> => transferring context: 30.75MB 0.7s
=> [1/2] FROM docker.io/library/openjdk:8-jdk-alpine@sha256:94792824df2df33402f201713f932b58cb9de94a0cd524164a0f2283343547b3 7.1s
=> => resolve docker.io/library/openjdk:8-jdk-alpine@sha256:94792824df2df33402f201713f932b58cb9de94a0cd524164a0f2283343547b3 0.1s
=> => sha256:43ff02e0daa55f3a4df7eab4f7128e6b39b03ece75dfeedb53bf646fce03529c 67.40MB / 67.40MB 6.4s
=> => sha256:962e53e3f8337e63290eb26703e31f0e87d70db371afae581ad3898b1dccb972 238B / 238B 0.1s
=> => sha256:856f4240f8dba160c5323506c1e9a4dbaaca840bf1b0c244af3b8d1b42b0f43b 2.35MB / 2.35MB 0.9s
=> => pushing layers 22.4s
=> => pushing manifest for docker.io/michaellambgelo/demo:latest@sha256:87640f491f579237e378aa832614df036720da21ed0d74cbe248ba1ed6ae4acb 0.3s
=> [auth] michaellambgelo/demo:pull,push token for registry-1.docker.io 0.0s
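The Dockerfile itself isn’t reproduced in this post, but the build log shows it’s tiny (141B transferred). A minimal sketch consistent with that output might look like the following; the JAR name is my assumption based on the demo’s artifact and version:

FROM openjdk:8-jdk-alpine
# Copy the JAR produced by the Maven build (name assumed from demo 0.0.1)
COPY target/demo-0.0.1.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]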
I want to look into Spotify’s Dockerfile Maven plugin, which takes an opinionated view of the Maven and Docker build processes and lets users combine the two build steps into one.
Using Docker, I wanted to create a new subdomain pointing at my Pi cluster that shows the cluster’s uptime.
Docker combines an application and its dependencies into a package which can be executed in an isolated container runtime.
A container is an isolated runtime environment managed by an operating system. A virtual machine (VM) is an abstraction of a physical machine which runs an isolated operating system. VMs are more limited for application development than containers, since multiple containers can run in parallel on a single node, networked together by Docker.
I wanted to be able to host applications on my Pi cluster using Docker. I’ve got a deployment running Uptime Kuma, a self-hosted monitoring dashboard.
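I won’t claim this is my exact deployment command, but the standard Docker invocation from the Uptime Kuma README looks like this:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1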
If you visit status.michaellamb.dev you can view this application. Of course, setting up the subdomain required some DNS changes. Since I use Google only as the domain registrar and prefer ProtonMail’s encrypted email service for secure correspondence, I opted to use Cloudflare for DNS. Cloudflare takes my home network IP and serves michaellamb.dev websites and apps, and all of this Cloudflare-directed traffic is first sent through a proxy manager.
I am interested in hosting more applications, and I wanted to leverage nginx as a proxy manager as I add them to my cluster. Nginx Proxy Manager fits the bill: it lets me add proxy hosts and serve applications over SSL using Let’s Encrypt. As traffic comes in from Cloudflare, the proxy manager directs it to the correct node in my cluster.
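My exact setup isn’t shown here, but the quick-start docker-compose file from the Nginx Proxy Manager docs is roughly the following; ports 80/443 carry the proxied traffic and 81 serves the admin UI:

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # public HTTP
      - '443:443'  # public HTTPS
      - '81:81'    # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt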
status.michaellamb.dev is where you can go to see the applications I’m running on my cluster. Since I consider the cluster an opportunity to practice learning in public, I hope it is an interesting way for the people who happen to read this blog to stay connected. If you’re in Jackson and want to talk tech, connect with me on my socials. Find them all at link.michaellamb.dev.
Spring Boot is a powerful project from the Spring ecosystem which enables developers to get Spring applications running with minimal setup. Standalone projects can be generated at start.spring.io, with any additional Spring dependencies included in just a few clicks.
I have created a Spring Boot demo project available on my GitHub. I plan to use this project to demonstrate some tasks I perform regularly in Spring Boot.
If you’ve been a follower of this blog you might recall I have previously integrated Swagger UI into a Go application (check out this blog post from October 2021).
Swagger is a suite of tools built around the OpenAPI Specification. Codebases can be generated from a Swagger doc, just as an existing codebase can be documented by adding Swagger-identifiable annotations.
In this post I will show how I integrated Springfox Swagger UI into my Spring Boot application.
This configuration assumes an existing Spring Boot project and integrates io.springfox:springfox-boot-starter (version 3.0.0).
Add the Springfox Spring Boot starter dependency to your pom.xml:
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-boot-starter</artifactId>
    <version>3.0.0</version>
</dependency>
springfox-boot-starter provides the following artifacts from io.springfox:
springfox-oas
springfox-data-rest
springfox-bean-validators
springfox-swagger2
springfox-swagger-ui
Where your Spring Boot app starts depends on your project; in my demo, this is a file called DemoApplication.java. In this file, only two annotations need to be added to the main application class:
@EnableOpenApi
@EnableSwagger2
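For reference, here is a minimal sketch of what the annotated class can look like; the package and class names come from my demo (yours will differ), and the imports are the springfox 3.0.0 annotation packages:

package dev.michael.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import springfox.documentation.oas.annotations.EnableOpenApi;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@SpringBootApplication
@EnableOpenApi
@EnableSwagger2
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}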
If it doesn’t exist yet, create a new Java class called AppConfiguration.java. The class itself will be empty, but it will have a few annotations that enable Springfox to scan the application code and identify endpoints.
@Configuration
@EnableWebMvc
@ComponentScan("dev.michael.demo")
@EnableOpenApi
public class AppConfiguration {
}
AppConfiguration.java will also implement the WebMvcConfigurer interface, overriding a couple of methods so that Spring Boot can serve Swagger UI alongside the Spring Boot app.
@Override
public void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/swagger-ui/**")
            .addResourceLocations("classpath:/META-INF/resources/webjars/springfox-swagger-ui/")
            .resourceChain(false);
}
addResourceHandlers enables Spring Boot to find the Swagger UI resources.
@Override
public void addViewControllers(ViewControllerRegistry registry) {
registry.addViewController("/swagger-ui/")
.setViewName("forward:" + "/swagger-ui/index.html");
}
addViewControllers enables Spring Boot to serve the main Swagger UI page.
Swagger UI will now generate API documentation automatically every time the Spring Boot application starts. With the configuration above, it is served from the /swagger-ui/ path (on a default local run, that would be http://localhost:8080/swagger-ui/).
In what was probably my favorite college course, I implemented a client/server system which would transmit messages across a lossy network. A client connects to the server to send a file. By tracking packet acknowledgements from the server, the client resends lost packets using a go-back-n protocol, where n is limited to 7 packets.
This was one of three programming assignments for a class called Data Communication Networks, which I took in the spring semester of 2016 with Dr. Maxwell Young. In this post, I’ll give an overview of my archived repo available on GitHub. This assignment was foundational to my understanding of distributed systems. Using an emulator program written by Dr. Young to simulate a lossy network, I learned the importance of writing resilient code by enforcing an algorithmic protocol we referred to as go-back-n, which the client uses to ensure that all data is received by the server. The server keeps track of the packets it receives from a client and only accepts the expected sequence number. Below, you’ll find my comments on the code I submitted as part of this assignment. We were permitted a partner when writing the algorithm itself, but the networking code we were expected to complete on our own.
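To make the protocol concrete, here is a compressed, hypothetical sketch of the client-side go-back-n loop; the helper functions stand in for the real UDP networking code, and none of this is the submitted assignment code:

// Hypothetical go-back-n sketch with a window of 7; stubs replace real networking.
#include <iostream>

const int WINDOW = 7;          // the assignment capped n at 7
const int TOTAL_PACKETS = 20;  // stand-in for the number of packets parsed from the file

void sendPacket(int seq) { std::cout << "send packet " << seq << "\n"; }  // stub
bool waitForAck(int &ack, int lastSent) { ack = lastSent; return true; }  // stub: pretend every packet is ACKed

int main() {
    int base = 0;     // oldest unacknowledged packet
    int nextseq = 0;  // next packet to transmit

    while (base < TOTAL_PACKETS) {
        // transmit everything that fits in the current window
        while (nextseq < base + WINDOW && nextseq < TOTAL_PACKETS)
            sendPacket(nextseq++);

        int ack;
        if (waitForAck(ack, nextseq - 1))
            base = ack + 1;  // cumulative ACK slides the window forward
        else
            nextseq = base;  // timeout: go back and resend the whole window
    }
    std::cout << "send PACKET_EOT_CLI2SERV\n";  // tell the server transmission is done
    return 0;
}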
Usually, a server is a program that runs as a service on an operating system, kept highly available to accept any number of clients. Because this particular client/server implementation was for a class assignment, I opted to close my server once the program requirements were satisfied. Obviously this wasn’t built for any sort of production use and is provided only as educational material.
makefile
I was constantly rebuilding the entire distributed system and running it locally on my machine. My makefile creates the necessary files from source in order: first compile the packet.o object file, then use packet.o to build the client and server executables. Running make main accomplishes all of this. Because the client and server are tiny, the build executed in less than a second, which made the feedback loop very quick while I was writing this.
Running make zip was a requirement from the assignment, so I included it here; it creates a zip file called pa2.zip containing the specified source files.
make clean deletes all executables in the directory.
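The makefile itself isn’t included in this post; based on the description above, a sketch could look like this (compiler flags and exact file lists are assumptions, and recipe lines must be indented with tabs):

# Sketch reconstructed from the description; not the original makefile.
main: packet.o
	g++ -o client client.cpp packet.o
	g++ -o server server.cpp packet.o

packet.o: packet.cpp packet.h
	g++ -c packet.cpp

zip:
	zip pa2.zip client.cpp server.cpp packet.cpp packet.h makefile

clean:
	rm -f client server *.o pa2.zip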
client.cpp
Shout out to Hannah Thiessen (referenced here by her former name, Hannah Church) who helped write the client implementation of go-back-n. We spent a long, long Saturday in Butler Hall working on just that portion alone. JJ Kemp observed our work, as was his way.
Libraries included in the client provide basic i/o and networking.
packet.h provides a class to represent individual packets sent between the client and server. Constants are defined to represent the type of packet received:
PACKET_ACK describes a packet from the server acknowledging receipt
PACKET_DATA describes a packet from the client carrying data
PACKET_EOT_SERV2CLI is a one-time packet type which tells the client to close
PACKET_EOT_CLI2SERV is a one-time packet type which tells the server to close at the end of transmission
The client uses libraries in the std namespace.
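The original header isn’t shown in this post, so here is a hypothetical sketch of what packet.h could have looked like, based purely on the description above:

// Hypothetical reconstruction of packet.h; the real class layout likely differed.
#ifndef PACKET_H
#define PACKET_H

#include <string>

const int PACKET_ACK          = 0;  // server acknowledges receipt
const int PACKET_DATA         = 1;  // client data packet
const int PACKET_EOT_SERV2CLI = 2;  // tells the client to close
const int PACKET_EOT_CLI2SERV = 3;  // tells the server to close (end of transmission)

class Packet {
public:
    Packet(int type, int seqnum, std::string payload)
        : type(type), seqnum(seqnum), payload(payload) {}
    int getType() const { return type; }
    int getSeqNum() const { return seqnum; }
    std::string getPayload() const { return payload; }
private:
    int type;     // one of the PACKET_* constants
    int seqnum;   // sequence number used by go-back-n
    std::string payload;
};

#endif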
Command line arguments to the client configure how the client will connect to the emulator program, provided as part of the assignment as an executable with its own CLI. The server connects to the opposite send/receive ports to enable two-way messaging with the client. The filename argument specifies what file to parse into packets for messaging.
Here’s what’s going on in this snippet: everything lives in the main function. It sets up network i/o and file i/o, as well as some tracking variables related to packet messaging and logging, including a timeout for ACK responses from the server.
If I were to refactor this code, I would separate the network and file i/o from the main logic for sending packets to the server, moving the relevant parts either into separate functions or into individual classes. Though there is some overlap between the client and server in regard to file and network i/o, I imagine creating individual classes for each concern would still result in client-specific and server-specific i/o classes.
server.cpp
The best resource I found for the basics of getting a server up and running was from linuxhowtos.org. I referenced it here because I’m a good boy who doesn’t want to plagiarize. Always give credit.
Libraries included in the server provide basic i/o and networking.
As in the client, packet.h provides the class representing individual packets and the same packet-type constants described above.
The server uses libraries in the std namespace.
randomPort is a function which accepts an int parameter and returns an int value. n_port is the port number the random port will communicate with, so the returned value must be different from n_port. There is a problem with the assignment to val in the while loop: the random value is modded by 64511 (the mod is computed first), and then 1024 is added to the remainder. When I originally wrote this, I was attempting to guarantee that the random port number wouldn’t be one of the commonly reserved ports (below 1024). My thought process went like this: a random value will be some large number seeded on the system clock, so I should map it into the range of possible port numbers while accounting for the fact that ports lower than 1024 are reserved, and then add that amount back.
The problem is that there is a chance this algorithm accidentally creates a port number outside the acceptable range (the maximum port number is 65535). A correct implementation separates the conditions: assign a value modded by 65535, then check whether the value is less than 1024; if it is, re-roll.
Here’s how I’d rewrite it now:
int randomPort(int n_port) // pick a random port
{
    int val = n_port;
    srand(time(NULL));
    // re-roll while the value collides with n_port
    // or falls in the reserved range below 1024
    while (val == n_port || val < 1024)
        val = rand() % 65535;
    return val;
}
If the generated value would be a reserved port number, the while loop simply generates a new value until one satisfies the exit condition of the loop.
The error method provides a convenient way to fail fast by printing a custom error message and exiting. I’ve learned a lot about error handling since writing this program, but this is still one of the most elegant pieces of code I’ve ever seen.
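For reference, the classic version of that function from the linuxhowtos.org sockets tutorial, which mine closely followed, looks like this:

#include <cstdio>   // perror
#include <cstdlib>  // exit

void error(const char *msg)
{
    perror(msg);  // print the custom message plus the errno description
    exit(1);      // fail fast
}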
Command line arguments to the server configure how the server will connect to the emulator program, provided as part of the assignment as an executable with its own CLI. The client connects to the opposite send/receive ports to enable two-way messaging with the server. The filename argument specifies which file to use for logging messages.