Planet CDOT (Telescope)

Friday, November 4, 2022


Ray Gervais

What Does "Cloud Native" Even Mean?

What is Cloud Native? This question has popped up in conversations at least a dozen times every month, and even I find myself wondering what the absolute definition of the term is. So, let's see how the big tech companies describe the term, followed by my opinion on it in the current context of things.

How Microsoft’s Documentation Defines Cloud Native

Cloud native is about speed and agility. Business systems are evolving from enabling business capabilities to weapons of strategic transformation that accelerate business velocity and growth. It’s imperative to get new ideas to market immediately.

At the same time, business systems have also become increasingly complex with users demanding more. They expect rapid responsiveness, innovative features, and zero downtime. Performance problems, recurring errors, and the inability to move fast are no longer acceptable. Your users will visit your competitor. Cloud-native systems are designed to embrace rapid change, large scale, and resilience.

Here are some companies who have implemented cloud-native techniques. Think about the speed, agility, and scalability they’ve achieved.

https://learn.microsoft.com/en-us/dotnet/architecture/cloud-native/definition

How Amazon Web Services Defines Cloud Native

Cloud native is the software approach of building, deploying, and managing modern applications in cloud computing environments. Modern companies want to build highly scalable, flexible, and resilient applications that they can update quickly to meet customer demands. To do so, they use modern tools and techniques that inherently support application development on cloud infrastructure. These cloud-native technologies support fast and frequent changes to applications without impacting service delivery, providing adopters with an innovative, competitive advantage.

https://aws.amazon.com/what-is/cloud-native/

How Red Hat Defines Cloud Native Applications

Cloud-native applications are a collection of small, independent, and loosely coupled services. They are designed to deliver well-recognized business value, like the ability to rapidly incorporate user feedback for continuous improvement. In short, cloud-native app development is a way to speed up how you build new applications, optimize existing ones, and connect them all. Its goal is to deliver apps users want at the pace a business needs.

But what about the “cloud” in cloud-native applications? If an app is “cloud-native,” it’s specifically designed to provide a consistent development and automated management experience across private, public, and hybrid clouds. Organizations adopt cloud computing to increase the scalability and availability of apps. These benefits are achieved through self-service and on-demand provisioning of resources, as well as automating the application life cycle from development to production.

https://www.redhat.com/en/topics/cloud-native-apps

How GitLab Defines Cloud Native

Cloud native is a term used to describe software that is built to run in a cloud computing environment. These applications are designed to be scalable, highly available, and easy to manage. By contrast, traditional solutions are often designed for on-premises environments and then adapted for a cloud environment. This can lead to sub-optimal performance and increased complexity.

The Cloud Native Computing Foundation (CNCF), an open source software organization focused on promoting the cloud-based app building and deployment approach, defines cloud native technologies as those that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.”

As enterprises move more of their workloads to the cloud, they are increasingly looking for solutions that are cloud native. Cloud-native solutions are designed from the ground up to take advantage of the unique characteristics of the cloud, such as scalability, elasticity, and agility.

https://about.gitlab.com/topics/cloud-native/

How the Cloud Native Computing Foundation Defines Cloud Native

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

The Cloud Native Computing Foundation seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.

https://www.cncf.io/about/who-we-are/

What Are My Experiences with the Definition(s)?

Even while drafting this, I feel that this section reads like a rant or complaint, but that is not by any means the intention. Instead, I'm simply explaining my own experience as someone who, a year ago, had no understanding of the concept of cloud native and how it differed from the more traditional developer experience.

From a high level, these definitions of the term Cloud Native and the mediums which relate to it make sense, but I'd argue that there's a different train of thought when exploring the lower levels of what makes up a platform built using Cloud Native technologies. The understanding of how we once deployed applications onto dedicated virtual machines (both on-prem and in the public cloud) doesn't translate to the deployment model implied by the term Cloud Native, nor does it always map to the same strategies:

  • Whereas we used to have CI/CD as the universal understanding of pipeline automation (and all the fun which accompanied it), we now have the GitOps CI/CD model, enabling one to deploy and reconcile state defined within a repository.

  • Whereas we wrote applications which had to be painstakingly compatible with a multitude of hosts (for fear of vendor/OS lock-in), we now strive to write applications which map directly to a single container configuration and are thus distributed and deployed in that format by default.

  • Whereas a multitude of application, ops, networking, admin, and other teams all had to be involved in the deployment of a simple application, Cloud Native empowers a single application team to fully manage their own deployments and requirements, at the cost of now requiring an understanding of a hodgepodge of platform tools to enable their various processes. For example, one may have the following tools deployed and configured on their dev cluster, or integrated into their workflow:

    • ArgoCD: a GitOps tool allowing us to define the state of our deployments and know that ArgoCD will attempt to mitigate drift wherever possible.
    • Traefik: an HTTP reverse proxy which can both auto-configure itself and provide automatic service discovery between your microservices when configured.
    • Helm: a package manager for Kubernetes applications and configurations.
    • Kustomize: Kubernetes-native configuration management, allowing for patching, updating, and other resource wizardry without needing template files. kubectl apply -k will become your bread-and-butter command in most cases.
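To make that last point concrete, here's a minimal Kustomize sketch; the file names, namespace, and label are illustrative placeholders of mine rather than anything from a specific project:

# kustomization.yaml referencing your existing manifests (names are illustrative)
cat > kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
  - service.yaml
namespace: dev
commonLabels:
  app.kubernetes.io/managed-by: kustomize
EOF

# Apply the whole directory with kubectl's built-in Kustomize support
kubectl apply -k .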

When it comes to navigating the world of Cloud Native, the CNCF Cloud Native Landscape is a fantastic way to see all the various projects you can leverage in your own environments, but be cautious not to get overwhelmed by all the possibilities! I find it helps to try out a tool from each category and learn more of that tool if you're content with it before trying another in the same category, but that can equally be confusing if you're not sure what you need. Luckily, Anurang Kumar's blog post on kubesimply has a fantastic guide which breaks down all the categories and types, which I've included below:

Containerization: The first step is to write your application code and containerize it with the help of containerization tools like Docker, CRI-O, Podman, etc. This is where containerization comes into the picture, and there is a separate section for it in the CNCF landscape. Container runtimes are responsible for running our containerized workloads on a host operating system.
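As a hedged illustration of that first step, here's what building an image might look like with Podman; the base images and the single-binary Go app layout are assumptions of mine, not from the post:

# Containerfile for a hypothetical Go service (base images are illustrative)
cat > Containerfile <<'EOF'
FROM docker.io/library/golang:1.19 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

# Build the image with podman (docker build works the same way)
podman build -t my-app:latest -f Containerfile .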

Container Registry: After containerizing our application, we need to store our images somewhere, and this is where the container registry comes into the picture. The landscape has a separate section for container registries, listing all the projects in this category.
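Continuing the sketch above, pushing that image to a registry is only a couple of commands; the registry host and repository are placeholders:

# Tag and push the locally built image (quay.io/example is a placeholder)
podman tag my-app:latest quay.io/example/my-app:latest
podman login quay.io
podman push quay.io/example/my-app:latest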

CI/CD: Next, we set up CI/CD for our application so that the process of building an image and pushing it to the container registry is automated.

Scheduling and Orchestration: To manage a large number of containers we need an orchestration engine that eases the process of managing them. A container orchestration engine is responsible for managing, scaling, deploying, and networking containers. The most popular container orchestration engine is Kubernetes, which was the first project to graduate from the CNCF.

Application Definition and Image Build: In Kubernetes we have to deal with multiple manifests. We can use Helm, a package manager for Kubernetes applications, to easily install applications into our clusters: it provides one-command installation of apps, we can upgrade a release with one command, and if something goes wrong we can roll back as well. Monokle is another tool that eases the process of managing YAML files. This section also includes multiple build tools like Packer, buildpacks, etc.
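As a quick, hedged example of the Helm workflow described above, using the public Bitnami nginx chart as a stand-in for your own application:

# Add a chart repository and install a release with a single command
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx --namespace web --create-namespace

# Upgrades and rollbacks are also single commands
helm upgrade my-nginx bitnami/nginx --set replicaCount=2
helm rollback my-nginx 1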

Observability and Analysis: We need to add an observability and analysis layer to our application, using different tools for monitoring, logging, tracing, and profiling. Monitoring is an important aspect of an application; it gives you actionable insights into how the application is behaving.

Service Proxy and Service Mesh: To expose our application to end users, we need to expose our services. To manage our services we need a service proxy, and if we have a proliferation of services we need a service mesh to manage them. A service mesh is also used to trace traffic and observe the application, and services can also act as load balancers.

Cloud Native Network: We need to set up the networking for our cluster, and for that we have the cloud native network section. Cilium and CNI are popular projects in this category. Cilium is open source software for providing, securing, and observing network connectivity between container workloads.

Security and Compliance: We can't ignore the security of our Kubernetes cluster, and for that we have the security and compliance section, which includes popular projects like OPA and Falco. We can also define policies for our Kubernetes clusters with the help of OPA, Kyverno, etc.

Database: A cloud native database is designed to take full advantage of cloud and distributed systems.

Streaming and Messaging: Cloud-native streaming is the processing of continuous data streams and is commonly used for large amounts of data. Cloud-native messaging is a communications model that enables asynchronous, push-based communication between microservices.

Resources

by Ray Gervais at Fri Nov 04 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Wednesday, November 2, 2022


David Humphrey

Building an Understanding

Recently I've been working with some students as they debug their programs and reflecting on their approaches.  Most thrash wildly when something doesn't work, sure that if they just do this-one-thing, or that, everything will work!  They cast madly about for the magical StackOverflow- or Google-incantation that will put their code back on the rails.  When it doesn't come, they blame themselves instead of their process.

First, let's begin where we all begin: lost.  I think it's important to call this out, because I see people assuming that only they are lost.  They aren't.  Programming, even when you're an expert, always involves doing something you've never done before, even if using tools or techniques that are familiar: imagine driving your car, but navigating it somewhere new that you've never been.

Second, what is the opposite of lost?  It's an interesting question because it requires one to think about the feelings associated with being somewhere.  To be lost is to not know where you are.  But even when you aren't lost, you are still in the same place.  How you interpret that experience is important.

Is found the opposite of lost?  I think many people assume so.  They imagine a perfect binary inversion of lost into found, a eureka moment that brings everything sharply into focus, all at once.  There are a handful of times where this happens, but for the most part, I've discovered that it's better to give up the idea of finding what I need and instead focus on understanding how to build it.

For me, understanding isn't a switch that starts out in the off position.  Most of the time, I am neither lost nor found, but moving in between.  In fact, the space between these two "binary" extremes turns out to be exceedingly large, like sliding along the z-axis in what looks like a simple two-dimensional co-ordinate system.

Understanding, when it comes, does so in small amounts.  I have to collect it over time.  Rather than walking through a door that separates lost and found, I have to assemble materials needed in order to build a path between them.

Understanding is built, slowly.  You often have to put in a lot of time and effort.  It's usually not possible to take a shortcut.  The length of time it takes, the frustration, the lack of clarity--none of this is a reflection on you.  This is what it takes to build an understanding.

by David Humphrey at Wed Nov 02 2022 15:02:57 GMT+0000 (Coordinated Universal Time)

Thursday, October 20, 2022


Alex Romanova

How Telescope made me a better Developer

Starter info

If you already know about me, you can skip this section
Firstly, I am a student in a Software Development degree. I will graduate in 2 more semesters, and I am currently in my 2nd semester of co-op. In Winter 2021-2022 I took a course called "Open Source Project", which involved all of us students working on a single open source project - Telescope. This project uses React, Docker, Docusaurus, Supabase, and other tools. In the end, it was up to us students to take it in the direction we wanted to evolve it in.

Real experience

So what was so special about that class?

Team size

Firstly, it was the number of people working on a single project. That semester there were 17 of us. Until then, the biggest groups we had ever worked in were of 4 people. We had had longer projects before, where we did planning and implementation over several semesters within the same group. However, a team was never that big before.

This emulates a more realistic work environment. Currently, in my co-op, I am part of a ~20 person team; everyone has their role, but all work on a single tool. We all communicate with each other, ask for help, information, and updates, arrange meetings, and get to know each other.

Existing codebase

So far in our college assignments, even if those were larger projects, and even in our personal projects, we always had to start anew. No matter what our task was, we created it from zero, and usually we got pretty familiar with our own code and tools.

With Telescope, that was way different. We already had a working application that was handed to us by the previous class. We had a lot of code, tools, and decisions that we had never seen before, and we had to adjust to that environment. We also inherited many of their older problems, bugs, and unexplored directions.

That, again, simulates a realistic work environment well. You will be new on your team, while they have all made a lot of progress. You will see their code, have to understand it, and change it to add your own improvements. Learning to integrate into an existing environment is a very useful skill to develop.

Freedom of exploration

With college, you don't get much freedom. Most of the time you have a specific task, with specific instructions and concrete tools to implement it. We did have some freedom in our other classes... sure. In our larger projects, we got to decide what to develop. However, time and knowledge constraints usually led us to choose a familiar path. If we wanted to deliver a product for a grade, in the end we'd better stick with a tool we could rely on.

Telescope, however, embraced exploration. Not only did we not have a definite task - we were the ones to decide what the tasks would be. We got to choose what to work on personally, and could even come up with tasks we would assign to someone else. It's not only the tasks, but also the tools. Want to add a completely new framework? Use a new language? No problem. It's on you to make it work, but it is allowed and supported - we students like new toys to play with. If you get excited about using this shiny new language, others will get excited to help you.

Management

Now, my favorite part.
In college, we never get to be the boss. You usually deliver to your professor. And even on your co-op, or some freelance work, you are usually the newcomer, the noob, the lowest on the food chain. You are never the one to make decisions.

Telescope made us temporary managers. We had to lead weekly meetings, and were responsible for that week's progress. You had to know what was going on. You had to make sure people did their jobs. You were the one to say, "no, let's focus on this and not that". And it was awesome.

Now, don't get me wrong, it was terrifying. How would I - a student who knows nothing - lead other people? I just got here, guys. However, let me explain how and why exactly it was awesome. Or, at the very least, useful.

Dev VS Manager

I, like most of us here, have always thought: "Management? Who wants to do that... What fun is it to talk to the people who do the work? It's much cooler to be the one doing the work, actually coding." Until... I tried to be one in Telescope. And I realised that, somehow, I want to be aware of what's going on. I want to know what our progress is. I want to prioritise directions that are essential.

Firstly it was because all of those things mentioned... WERE WRONG. I see a documentation problem, and I WANT TO FIX IT. And NOBODY ELSE DOES! It has to be me. To fix those things, you need to know what's going on... so, ok. I gotta find out what's going on. I need to ask people, read code, talk to them. BECAUSE NOBODY WILL! And as I'm fixing all these annoying little steps, I realise I'm actually doing.. management. Since I am learning what everyone is doing, getting just enough information to be aware, but not deep enough to actually code - I get to be aware of everything.

In Telescope, people would usually go into their own niches. They would explore a new tool, or develop a new service, and become "kings" of their own little kingdoms. Now, suddenly, we technically have a large project - but in reality, we have many newly established kingdoms scattered all around. And nobody else knows what those kingdoms are, except their kings. How would anyone figure out the direction to go, if nobody is aware of the whole picture? People need to communicate somehow. And I was the one to get this information out of them.

Once I know the information - I can inform the rest of the kingdom. And then - we can make our decisions.

I can't exactly put my finger on why this is cool. But I keep getting compelled to establish paths of communication, softening the edges of the development process and filling in the gaps of unknowns. If not cool - it for sure is important. It has to be done.

How it improves you as a developer

It's very likely that my excitement didn't transfer over to you. I get it - planning, meetings, documentation, talking to people... isn't fun. But let me tell you how it helps to have such an experience.

In the work environment you will likely have a manager. You will have some kind of meetings, some kind of communication with the other members.

Maybe you have been in that role yourself, where you had to know what was happening, and you just had no idea. You have a deadline, and you're responsible for making sure this team of cogwheels delivers the product - how do you even figure out if you're doing well or are fucked? Suddenly, you need to know what is going on. You need the cogwheels to communicate their progress, direction, and problems to you. You need meetings.

As a developer it is easy to get annoyed with management. Sometimes because the management themselves actually suck. But sometimes you really gotta understand the other side. Once you understand what they want to hear from you; once you've had the experience of being the one who's asking and getting unclear answers - you start to change the way you communicate.

Let's make a web dev analogy, wee!

You have two sides of an application: the front end and the back-end API. You need API calls to transfer information between them.

You can just dump everything in a JSON and forget about it. You can send just a single thing. You can make a request once a week, once a day, or every second. What, in what format, and how often do you actually send it? You need to know what the API is expecting. You need to know its data format. You need to know why the API even asks you for data in the first place. You are the front end, and the manager is the API. Or the other way around, if you'd prefer to be the back end. No matter the environment, you need to know what both sides are sending and expecting. That is how communication works.

Communication is, truly, essential. As long as you have a team, you need it. Somewhere in your career you might want to advance to higher positions. Those usually involve more management the higher you go. A senior developer. A system architect. A team lead. Higher knowledge, higher responsibility, higher salary. If you plan to grow, this is the path. And it requires you to learn these skills of management, planning, and communication. The best way to show your employers you are ready for a raise is to prove you already have these skills. Show them you meet the requirements. And they will notice. Because other developers won't.

And that is how the experience of being in a manager position improves you as a developer.

by Alex Romanova at Thu Oct 20 2022 21:43:55 GMT+0000 (Coordinated Universal Time)

Friday, October 14, 2022


Milton Paiva

How to make VIM Text Editor More Productive

Today, while reading Linux Magazine, I found a very nice article (https://www.linux-magazine.com/Issues/2021/251/Vim-Primping) about how to improve the Vim text editor.

After I started using Visual Studio Code, I started missing some functions in Vim, such as auto-indentation and highlighting of the corresponding brackets.

To make this happen, you should edit the .vimrc file located in your home directory (~/.vimrc) and add the following parameters:

1. Edit the .vimrc in your home dir
2. Add the parameters
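The post doesn't reproduce the exact options from the magazine article, but a minimal sketch covering the two features mentioned (auto-indentation and matching-bracket highlighting) might look like this:

# Append a few common options to ~/.vimrc (my own picks, not the article's exact list)
cat >> ~/.vimrc <<'EOF'
syntax on          " enable syntax highlighting
set autoindent     " carry indentation over from the previous line
set smartindent    " add indentation after braces and keywords
set showmatch      " briefly highlight the matching bracket when one is typed
set tabstop=4 shiftwidth=4 expandtab
EOF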

After making the changes, you only need to edit your files and enjoy the improvements. Have fun!

by Milton Paiva at Fri Oct 14 2022 12:51:11 GMT+0000 (Coordinated Universal Time)

Saturday, October 8, 2022


Ray Gervais

Setting up a Local Developer Environment for K8s

This is the first in a series of Kubernetes wisdom nuggets that I wish I had when I started working with Kubernetes in both a professional and hobby capacity. All instructions are relative to Fedora and the Red Hat Enterprise Linux family using dnf, but should be applicable to all distributions and macOS.

Setting Up podman, containerd, go, and kind

On Fedora, podman is the default daemonless container engine choice for development and testing. For the curious, here are a few sites which break down some of the differences between Docker and Podman: Imaginary Cloud: Podman vs Docker.

In my case, installation is done through dnf on Fedora and is quite simple. Note, for those who battle with muscle memory, that the second package, podman-docker, aliases docker to podman when installed, since their CLI APIs are the same.

❯ sudo dnf install -y podman podman-docker

Dependencies resolved.
==========================================================================================================================================================================
Package                                      Architecture                     Version                                          Repository                           Size
==========================================================================================================================================================================
  Installing:
  podman                                       x86_64                           4:4.2.1-2.fc36                                   @updates                             41 M
  podman-docker                                noarch                           4:4.2.1-2.fc36                                   @updates                            7.2 k
  Installing dependencies:
  aardvark-dns                                 x86_64                           1.1.0-1.fc36                                     @updates                            3.1 M
  catatonit                                    x86_64                           0.1.7-5.fc36                                     @anaconda                           831 k
  conmon                                       x86_64                           2:2.1.4-2.fc36                                   @updates                            327 k
  container-selinux                            noarch                           2:2.190.0-1.fc36                                 @updates                             57 k
  containers-common                            noarch                           4:1-59.fc36                                      @updates                            111 k
  criu                                         x86_64                           3.17.1-2.fc36                                    @updates                            1.5 M
  criu-libs                                    x86_64                           3.17.1-2.fc36                                    @updates                             85 k
  crun                                         x86_64                           1.6-2.fc36                                       @updates                            430 k
  fuse-overlayfs                               x86_64                           1.9-1.fc36                                       @updates                            145 k
  libbsd                                       x86_64                           0.10.0-9.fc36                                    @anaconda                           344 k
  libnet                                       x86_64                           1.2-5.fc36                                       @anaconda                           128 k
  netavark                                     x86_64                           1.1.0-1.fc36                                     @updates                            8.7 M
  podman-gvproxy                               x86_64                           4:4.2.1-2.fc36                                   @updates                             11 M
  shadow-utils-subid                           x86_64                           2:4.11.1-4.fc36                                  @updates                             55 k
  slirp4netns                                  x86_64                           1.2.0-0.2.beta.0.fc36                            @anaconda                            89 k

Next, we’ll install Go if it’s not already found on the system, since we’ll be using Go to install the latest kind binary. Installing and setting up kind is an easy two-liner assuming that podman was successfully installed and configured earlier, so I’ve bundled the commands below:

# Install Go
❯ sudo dnf install golang -y

Dependencies resolved.
==========================================================================================================================================================================
Package                               Architecture                          Version                                         Repository                              Size
==========================================================================================================================================================================
Installing:
  golang                                x86_64                                1.18.6-1.fc36                                   updates                                615 k

Transaction Summary
==========================================================================================================================================================================

  Total download size: 615 k
  Installed size: 7.5 M


# Install kind binary
❯ go install sigs.k8s.io/kind@latest

# Create a new cluster
❯ kind create cluster --name dev 

enabling experimental podman provider
Creating cluster "dev" ...
✓ Ensuring node image (kindest/node:v1.25.2) 🖼 
✓ Preparing nodes 📦  
✓ Writing configuration 📜 
✓ Starting control-plane 🕹️ 
✓ Installing CNI 🔌 
✓ Installing StorageClass 💾 
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

Just as the output recommends, let’s run kubectl cluster-info and validate that our local cluster is running:

❯ kubectl cluster-info

Kubernetes control plane is running at https://127.0.0.1:35667
CoreDNS is running at https://127.0.0.1:35667/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And for those curious, if we do podman ps we’ll see our node is a container!

❯ podman ps
CONTAINER ID  IMAGE                                                                                           COMMAND     CREATED         STATUS             PORTS                      NAMES
d68e23b76760  docker.io/kindest/node@sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace              29 seconds ago  Up 28 seconds ago  127.0.0.1:34647->6443/tcp  dev1-control-plane

Installing Your Custom Resources and Working In a Rapid Iteration Feedback Loop

Assuming you’re working with a Kubernetes operator derived from kubebuilder or operator-sdk, a Makefile should be present which makes installing the Custom Resource Definition as simple as make install. From there, you can see your CRD in the cluster using kubectl get crds -A:

❯ kubectl get crds -A
NAME                                       CREATED AT
weatherapis.batch.weather.raygervais.dev   2022-10-04T01:17:18Z

I’m using the weatherapi CRD, which will be found in a different repo (//TODO), for this example. The simple idea behind it is that, given a CR with valid latitude and longitude coordinates, it returns the current weather data for that location in the form of a resource status. Is this idiomatic to Kubernetes? Probably not, but it works as an example of a Go operator interfacing with the outside world and lets me experiment with concepts and patterns which have been both a blessing and a curse for the past year of my career. I digress; back on topic.
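As a rough sketch of what such a CR might look like (the spec field names are my own guesses since the repo isn't linked yet; only the group, version, and kind come from the CRD and logs above):

# Apply a hypothetical WeatherApi custom resource
cat <<'EOF' | kubectl apply -f -
apiVersion: batch.weather.raygervais.dev/v1alpha1
kind: WeatherApi
metadata:
  name: toronto-weather
spec:
  latitude: "43.65"
  longitude: "-79.38"
EOF

# Watch the resource; the operator should eventually write weather data into its status
kubectl get weatherapis.batch.weather.raygervais.dev toronto-weather -o yaml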

With our CRD installed on the cluster, we can now run a debugging instance of our operator connected to the cluster. In most cases, I default to using VS Code for this, but if you just want to run your operator locally (outside the cluster) to monitor logs, etc., the make run command will be your best friend.

❯ make run
go fmt ./...
go vet ./...
go run ./main.go
1.6652576181657362e+09	INFO	controller-runtime.metrics	Metrics server is starting to listen	{"addr": ":8080"}
1.6652576181662054e+09	INFO	setup	starting manager
1.6652576181663492e+09	INFO	Starting server	{"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
1.665257618166365e+09	INFO	Starting server	{"kind": "health probe", "addr": "[::]:8081"}
1.6652576181664536e+09	INFO	Starting EventSource	{"controller": "weatherapi", "controllerGroup": "batch.weather.raygervais.dev", "controllerKind": "WeatherApi", "source": "kind source: *v1alpha1.WeatherApi"}
1.6652576181664743e+09	INFO	Starting EventSource	{"controller": "weatherapi", "controllerGroup": "batch.weather.raygervais.dev", "controllerKind": "WeatherApi", "source": "kind source: *v1alpha1.WeatherApi"}
1.665257618166484e+09	INFO	Starting Controller	{"controller": "weatherapi", "controllerGroup": "batch.weather.raygervais.dev", "controllerKind": "WeatherApi"}

Now, we have a rapid iteration feedback loop with minimal limitations. Depending on your own hardware, you could spin up your entire cluster deployment(s) locally, as some would in a CI/CD pipeline for integration tests, or you could deploy ReplicaSets and various other items to test idempotency, load-balancing, etc. With our local cluster, we as developers have access to a dedicated environment which we can test our programs on, and also break, without the fear of running into another developer’s testing instance. Most importantly, setup and teardown are stupidly simple and quite scriptable! More on that last point at the end. The real question is: how can we improve this setup for those who wish to interact with their cluster in a visual manner?

Kubernetes Dashboard

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc). For example, you can scale a Deployment, initiate a rolling update, restart a pod or deploy new applications using a deploy wizard.

Deploying the dashboard to our local cluster can be done with kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml, and we can confirm it was successful with kubectl get deployments -n kubernetes-dashboard:

kubectl get deployments -n kubernetes-dashboard
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
dashboard-metrics-scraper   1/1     1            1           17s
kubernetes-dashboard        1/1     1            1           17s

To access the internal network (and thus, the dashboard), we need to run kubectl proxy in another terminal. Doing so exposes localhost:8001.

We can access the UI at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/, and expect to see an authentication screen requesting a token or kubeconfig. In our case, we’ll create a sample ServiceAccount, as recommended by their documentation, and ClusterRoleBinding:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Save the above as a file, and deploy using kubectl apply -f <FILE_NAME>! If successful, we can create a token using kubectl -n kubernetes-dashboard create token admin-user which will return a token we’ll use to sign into the dashboard as an admin.

I like to think it’s partially meta that the screenshot is of the dashboard displaying its own deployment.

There are a few common items I use the Kubernetes Dashboard for, often because doing so is much easier than via CLI commands:

  • Updating a resource (such as removing a finalizer during a failed deletion; see the CLI sketch after this list)
  • Reading the status & conditions, though kubectl get <resource name> -n <namespace> -w works incredibly well if you only need terse information.
  • Validating pod health & logs
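For that first item, a hedged CLI equivalent (the resource name and namespace are placeholders): a merge patch that nulls out the finalizers list lets a stuck deletion complete.

# Clear finalizers on a hypothetical stuck resource so deletion can finish
kubectl patch weatherapis.batch.weather.raygervais.dev toronto-weather \
  -n default --type=merge -p '{"metadata":{"finalizers":null}}'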

Setup and Teardown

So, we’ve already seen the setup individually, but how could we script this?

# Install dependencies
sudo dnf install -y podman podman-docker golang

# Install kind
go install sigs.k8s.io/kind@latest

# Create new cluster
kind create cluster --name dev

# Validate
kubectl cluster-info

And of the tear down?

kind delete cluster --name dev

Resources

by Ray Gervais at Sat Oct 08 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Monday, October 3, 2022


David Humphrey

On the value of small contributions

My open source students are beginning Hacktoberfest this week and already the DMs are coming in with a familiar question:

"Is this issue big enough? Is this too simple?"

I understand the question, since I explicitly ask that students work in real open source projects, avoiding those repos that appear suddenly every October and vanish just as quickly in November.  I want to see everyone working on valuable contributions to existing projects.  But how big should a contribution be?

My answer: as small as possible, while providing value.  Let me give you a few recent examples.

For the past month, our beloved blogging aggregator, Telescope, has been broken.  I asked my current students to use our new Seneca SSO based Sign-Up flow and create an account.  Later that same week, I was shocked to see that none of their blogs were showing up in the timeline.  I decided to look at the database to see what was going on (maybe no one actually signed-up?), only to be met by a set of angry 400 errors on the database's REST API.

I filed an issue and not long after, Kevan appeared with a fix that changed a single line!  The difference between me being able to properly administer the database and being locked out came down to updating 15 characters!

After shipping these 15 characters to production, I was now able to see what was happening in the database, and sure enough, the students had created accounts: the system just wasn't processing or showing their content for some reason.

Thankfully Roxanne appeared in the issue with some ideas and a willingness to turn her debugging prowess onto the problem.  Her research uncovered an issue with how new users were being processed by the system.

Telescope has three categories of users:

  1. Those grandfathered into Telescope from the Planet CDOT Feed List wiki page
  2. Those who have created accounts via our SSO based Sign-Up flow
  3. Those who have graduated from 1. to 2.

All of our testing last year focused on people in category 3, namely, those who were listed in the wiki page, but also created accounts for themselves.  In this case, we link the accounts in our backend, allowing an existing user to sign-up and re-take control of their account.

This fall, for the first time, all of our users are from category 2 (i.e., creating brand new accounts that don't exist in the wiki).  It's a scenario that should work (in theory!) but clearly doesn't in practice.  Our feed parser couldn't find a name for all the new users, so ignored their feeds.

The problem, once understood, was easy to fix: Roxanne made another 1-line change, affecting only 20 characters! Magic.

The Telescope project is about to turn 3 years old this month (it started in Oct 2019, remember that world?).  Since then, 229 contributors have written close to 25K lines of code.  The code is mature, and the architecture takes time to learn and understand.  I don't think any of the recent devs (myself included) knows all of it.

Getting involved with a medium-sized open source project like Telescope takes time.  You have to figure out where a bug is coming from, try and get things running locally, test your hypothesis, discover that you're wrong and try again, talk to people about what you learned, and repeat that process for hours, days, or weeks without giving up.  Often, like I've just shown you above, the actual fix is trivial, even obvious!  But only once you've figured out what's going on in the first place.

Lots of big things get brought down by tiny forces (try stepping on a single piece of lego in the night as you fumble around a room in the dark).  In the examples above, I showed you that 35 characters were all that it took to make 25K lines of code unusable.

Having a big impact isn't about writing a lot of complicated code.  Rather, real programming is doing exactly what's needed to make a system work, and nothing more.  If you can do it by changing a single line of code, you're a true master.  If you can do it by removing a single line of code, you're a wizard.

My advice to all my students this October who are struggling with imposter syndrome and feeling like their tiny contributions won't amount to anything: be like Kevan and Roxanne.

by David Humphrey at Mon Oct 03 2022 14:12:42 GMT+0000 (Coordinated Universal Time)

Friday, September 16, 2022


David Humphrey

Learning to live with failure

This week I've been talking with a number of students in my cloud computing course who all have a similar bug.  They are making network requests to a REST API endpoint, and instead of getting back the expected JSON result, they get an error about the JSON parser not being able to parse a string that begins with <!-- <!doc....  They keep pasting this error, wanting me to share their offence at its existence; but all I see is software doing what software always does, and ask them, "why is it happening? what is sending that HTML?"

One of the biggest challenges with teaching people how to program is that most people think about computer programs from a user's perspective vs. a developer's.  Let's define the difference like this:

  1. user: someone who spends most of their time using working programs
  2. developer: someone who spends most of their time using failing programs

When I'm writing software, I'm spending the majority of my time with a program that doesn't (yet) work.  If I'm lucky enough to finish it (whatever that means) and ship something, I'm basically done: a program ceases to be interesting to me once it's working.  Obviously I'll return to fix more bugs down the road, but in so doing, I'll re-enter this same phase of "working with a failing program."

A user of this same program I've just shipped will encounter it as a tool: something invisible in the process of getting work done.  If (or when) it ever fails, the program suddenly becomes visible again, and it needs to go back to the programmer to fix.

Using programs and developing programs require different ways of working.  Once you understand this, it's very hard to not always be programming; that is, at this stage of my career, I can't really use software anymore without also finding, filing, and fixing bugs.  Meanwhile, my current open source students are telling me that they can't find any bugs in their partner's code!

But back to the process of learning how to do this.  When you're starting out, the tendency to treat software you write as a program you'd use is strong, and it causes one to overlook lots of things.  It's easy to focus on "it doesn't work" vs. "here is what it is doing," which I've written about before.

Programming is fundamentally about being OK with programs not working.  It's the default way that we encounter code, and contrary to what you might think, "working code" is something of an exception.  All code is broken, or never worked, and learning to dwell in that knowledge and discomfort means that you'll get closer to the real work of programming.

You need to know that you don't suck, but your code does.  So does mine.  All code does.  It's all part of the process.

by David Humphrey at Fri Sep 16 2022 17:51:42 GMT+0000 (Coordinated Universal Time)

Tuesday, August 23, 2022


James Inkster

A Trip and Some Fishing.

I recently went on a trip to Halifax, Nova Scotia. For anyone that does not know where that is, it’s here. I had never been to the east coast of Canada, but I had been told the people were great, the lobster was perfect, and the air was cleaner than that of Ontario. Also, for the sake of clarification, when I mention technology in this post, I’m referring to cell phones, IoT, computers, databases, etc. - basically anything with some kind of electronic pulse to it. That might not be the right term, but I watch a lot of Hollywood movies where they use it incorrectly, so I’ll go with it.

I’m not a big seafood guy on my own - don’t get me wrong, I love seafood, but it’s not as easy to cook and to figure out when it’s done as mainstays like chicken and beef. My first instance was actually having a full steamed lobster. With no idea how to even begin eating this or taking it apart, you kind of just jump in. You have to work for your dinner. It was incredibly fun, and felt rewarding. The difficulty is the fun part, and you can see from my crooked face that I’m having the best time. The convenience of Uber back home, with everything delivered all ready to go, just is not as enjoyable - it’s convenient and easy, but not nearly as fun. I would eat lobster more often for the difficulty and the happiness it provides. (The cost, however, would probably bar me from doing that.)

About to dive in.

It is very much small-town living in NS; people like their simple lives out there. They would spend days fishing on a boat, walking along the Halifax pier, or enjoying the gardens of the city. It was interesting, as I saw fewer people on their phones - mostly just tourists; rarely did I see a local with their head down looking at their phone. Perhaps it’s because most tourists use them for Google Maps out of fear of getting lost, or need to capture specific moments, and locals do not need this. However, when I’m in Ontario, even the locals spend a lot of time looking at their phones.

It’s interesting to me how little many of them interacted with cell phones, which have become a staple of modern-day life in Ontario. I enjoyed it; everyone gave you their full attention when you were speaking to them. You never felt rushed in a conversation. And everyone was super friendly.



During the trip, we had booked a boating tour online. We had scheduled a fishing boat tour; they were one of the few that had open slots. 7 AM. Okay, it’s going to be a super early morning for a vacation, but to be out on the water at that time is amazing. It’s so peaceful.

Picture I took of Peggy’s Cove... at 6 am.

Well, it started off rough. They had quadruple-booked the time slot. A boat that can carry 6 people…had about 18 people show up. There were 4 groups of people: 6, 5, 4, and 2.
Their first comment was “Oh, the website must have screwed up”, then it was “This happened last week too.”, and finally “I just drive the boat.”

You could tell these were not the people who scheduled it, and that the booking was actually run by someone not there. Apparently they had had multiple problems with the 7 am slot. They had already taken all our money, and the owner was impossible to get a hold of to rectify the situation.

This is exactly what I expect when someone who’s not technology savvy tries to implement technology in their business. It also made me realize there probably weren’t a lot of technically savvy people in the area either. My girlfriend instantly said, “Why don’t they have it set up so they cross-check the database to see if there is a booking at that time”, or something along those lines. I had to remind her that a lot of these individuals do not know technology or use technology at all. They acquire it and just pray it works, no matter how simple it is. They are also limited by the network of people they have access to for help with any issues they run into.

We eventually got on the boat…how? Well, the group of 4 actually ended up being chosen to go, and since we were a group of 2, we could fit on the boat with them, no problem. Was it luck? Was it skill? It was skill if anyone asks, but truthfully it was luck.

I did however get to do some deep sea fishing!


But this earlier problem really started bothering me. How do you get people to use technology to make their life happier and easier, when they are already living a relatively easy life (not in terms of social challenges, etc., but more about the pacing of their lives)? They did not really have a need for technology, and in this particular instance it made their life more difficult than if they just had a first-come, first-served or register-in-person requirement.

How would technology have to be designed differently to accommodate the people who just did not embrace using technology like those who have? It made me really think about the inevitability of technology. And in the last month, I’ve actually started to change my opinion: I don’t think technology, and people being involved in technology, is inevitable. Technology’s purpose, to me, is to bring happiness to people’s lives, and so many Nova Scotians already seemed to have all the happiness with their small-town life.

I feel this trip reminded me of the importance of technology, and I kind of felt the need to share and remind people about this. Technology is supposed to improve our lives. And it’s very easy to let it consume our lives. When we code and build things, it needs to be with purpose. And the technology that makes things easier does not always make us happier.




by James Inkster at Tue Aug 23 2022 17:43:40 GMT+0000 (Coordinated Universal Time)

Tuesday, August 9, 2022


Cindy Le

Configure third-party domain name and HTTPS for CloudFront distribution

Part 3/n of the Cloud Resume Challenge

Chunk 1: Building the frontend

After you have successfully used CloudFront to serve the Next.js website hosted on Amazon S3, you'll notice that the website https://d1ij0wngzhbeyc.cloudfront.net/ is using HTTPS - but what if you wanted to use another domain name, like https://cindyle.dev?

Stack:

  • Next.js
  • Amazon S3
  • Amazon CloudFront
  • Third-party Domain (i.e., Namecheap)
  • AWS Certificate Manager
  • Route 53

Setting up the Namecheap DNS

You can purchase a domain name from Route 53, but while I was playing around with this Chunk, I bought a domain name from Namecheap.

  1. Purchase a domain name from Namecheap or any other domain registrar
  2. To add an alternative domain name (CNAME) to a CloudFront distribution, you must attach a trusted certificate that validates your authorization to use the domain name.
  3. Head over to the AWS Certificate Manager console and Request a certificate
    • Certificate type: Request a public certificate
    • Fully qualified domain name: cindyle.dev
    • Select validation method: DNS validation - recommended
  4. After you send the request, you should see that the Status says Pending Validation
  5. Click on the Certificate ID, and you should see the CNAME name and CNAME value
  6. Over in your Namecheap console, in the Domain tab, make sure you have
    • Nameservers: Namecheap Basic DNS
  7. Go to the Advanced DNS tab, select ADD NEW RECORD, and add a CNAME record using the CNAME name and CNAME value from step 5
  8. In the AWS Certificate Manager, wait for the Status to go from Pending validation to Success. This could take hours, but with the record's TTL set to 5 minutes it should only take 10-15 minutes.
  9. Once the certificate is validated, you don't need to keep the record on Namecheap anymore, so delete it.
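If you prefer the CLI, a rough equivalent of steps 3-5 looks like this (the certificate ARN is a placeholder; note that certificates used by CloudFront must be requested in us-east-1):

# Request a public certificate with DNS validation
aws acm request-certificate \
  --domain-name cindyle.dev \
  --validation-method DNS \
  --region us-east-1

# Print the CNAME name/value to add at your registrar
aws acm describe-certificate \
  --certificate-arn <CERTIFICATE_ARN> \
  --region us-east-1 \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'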

Add an Alternative domain name (CNAME) to CloudFront

  1. In the Amazon CloudFront console, select the distribution you created previously
  2. Click on Edit and under Alternative domain name (CNAME) - optional, click on Add item
    • Alternative domain name: cindyle.dev
    • Custom SSL certificate: cindyle.dev (3ff2abcd-12ac-1234-a000-9jklfdsirba) from the dropdown menu
  3. Save the changes and wait for the distribution to finish deploying

Use Route 53 as the DNS Service to manage your DNS records

  1. In the Route 53 Console, create a new Hosted Zone
  2. Click on the Hosted zone details, and you should see four servers listed underneath Name servers
  3. Over in the Namecheap console, in the Domain tab, make sure you have Nameserver: Custom DNS and enter the four name servers from your Route 53 Hosted Zone details
  4. Back in the Route 53 console, create a new Record
    • Record name: cindyle.dev (keep blank to create a record for the root domain)
    • Type: A - Routes traffic to an IPv4 address and some AWS resources
    • Alias (turn on)
    • Route traffic to: Alias to CloudFront distribution
    • Dropdown: d1ij0wngzhbeyc.cloudfront.net
  5. Create another new Record
    • Record name: cindyle.dev (keep blank to create a record for the root domain)
    • Type: AAAA - Routes traffic to an IPv6 address and some AWS resources
    • Alias (turn on)
    • Route traffic to: Alias to CloudFront distribution
    • Dropdown: d1ij0wngzhbeyc.cloudfront.net
  6. You should see that https://cindyle.dev is live
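For reference, a hedged CLI sketch of steps 4-5; the hosted zone ID is a placeholder, and Z2FDTNDATAQYW2 is (to my knowledge) the fixed hosted zone ID used for CloudFront alias targets:

# Describe the desired A (alias) record in a change batch file
cat > records.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "cindyle.dev",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1ij0wngzhbeyc.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Apply the change to your hosted zone
aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR_HOSTED_ZONE_ID> \
  --change-batch file://records.json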

by Cindy Le at Tue Aug 09 2022 16:03:00 GMT+0000 (Coordinated Universal Time)

Monday, August 8, 2022


Cindy Le

Use CloudFront to serve a static website hosted on Amazon S3

Part 2/n of the Cloud Resume Challenge

Chunk 1: Building the frontend

After configuring the Next.js website on S3, you'll notice that the website endpoint http://cindy-crc.s3-website-us-east-1.amazonaws.com/ is served over HTTP. The website should use HTTPS for security, and this is where we'll need to use CloudFront.

Stack:

  • Next.js
  • Amazon S3
  • Amazon CloudFront

As you'll remember from the previous blog, you'll need to remove any Bucket policy attached to the S3 bucket and Block all public access to continue with this.

We will be using the S3 bucket's REST API endpoint as the origin, with access restricted by an Origin Access Identity (OAI).

Create the CloudFront Distribution

I mostly followed Use CloudFront to serve a static website hosted on Amazon S3

  1. In the Amazon CloudFront console, choose Create distribution
    • Origin domain: cindy-crc.s3.us-east-1.amazonaws.com (from the dropdown menu)
    • Name: cindy-crc.s3.us-east-1.amazonaws.com (should be autofilled)
    • S3 bucket access: Yes use OAI (bucket can restrict access to only CloudFront)
    • For Origin access identity, select Create new OAI and choose the newly created OAI
    • Bucket policy: Yes, update the bucket policy
    • Default root object - optional: index.html
  2. Click Create distribution and wait for the Last modified status to change from Deploying to a date. This may take ~5 minutes
  3. You should be able to visit the Distribution domain name link https://d1ij0wngzhbeyc.cloudfront.net and see the resume website

Troubleshooting

If you're getting an Access Denied error, it's probably due to the bucket policy. You should be blocking all public access and only allowing your CloudFront distribution to access your S3 bucket. The S3 bucket policy should look like:

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
-                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity Distribution-ID"
+                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E392Q20ZNZ7N4X"
            },
            "Action": "s3:GetObject",
-            "Resource": "arn:aws:s3:::Bucket-Name/*"
+            "Resource": "arn:aws:s3:::cindy-crc/*"
        }
    ]
}

by Cindy Le at Mon Aug 08 2022 19:38:00 GMT+0000 (Coordinated Universal Time)

Friday, August 5, 2022


Cindy Le

Configure a static Next.js website on Amazon S3

This is the start of a multi-blog series as I make my way through the Cloud Resume Challenge. As advertised, the Cloud Resume Challenge is a popular 16-step project that takes you from certified to hired in cloud.

I already have a job and have passed two AWS exams. Still, a couple of months ago, I ventured off to see what's out there, and oh boy, the number of cloud questions I could not answer at the interviews made me really think, "I don't know my stuff. I owe it to the people involved to do a really good job here". The worst part is I didn't blindly send out resumes and hope these companies would call me; I reached out to my connections and asked them to put my name out there, some of whom returned to me with opportunities that had not been made public yet. I underperformed in these interviews, but I now know what these companies are looking for.

Anyway, I don't want to ramble on about how I replayed these interviews in my head because this is about what I'm going to do moving forward to make myself a better candidate, so whenever I decide to venture off again, I'll be prepared!

Chunk 1: Building the frontend

The minimum requirement for this part of Chunk 1 is to have an HTML page with some CSS and serve the files in an Amazon S3 bucket.

It sounds simple enough, but since I love frontend development, I decided to add the Developer Mod: I'll build my website using a JavaScript framework instead of just HTML and CSS.

Stack:

  • Next.js
  • Amazon S3

Getting Started with Next.js

I made a GitHub repo here if you wanna see what I have so far. For this part of Chunk 1, you really just need to have a Next.js app, then build and export it.

  1. Create a new Next.js app using create-next-app. We have been using pnpm on Telescope, and I simply cannot go back to npm.
pnpm create next-app
  2. Run pnpm run dev to start the development server on http://localhost:3000 and view your initial Next.js application.

  3. Update the build script in the package.json file.

  "scripts": {
    ...
-    "build": "next build",
+    "build": "next build && next export",
    ...
  },
  4. Running pnpm run build will generate an out directory with everything you need for your static website.
cloud-resume-challenge
  ├── .next
  ├── node_modules
+ ├── out
+ │   ├── _next
+ │   ├── favicon.ico
+ │   ├── index.html
+ │   └── vercel.svg
  ├── pages
  ├── public
  ├── styles
  ├── .eslintrc.json
  ├── .gitignore
  ├── next.config.js
  ├── package.json
  ├── pnpm-lock.yaml
  └── README.md

Host the Next.js website on Amazon S3

I mostly followed Tutorial: Configuring a static website on Amazon S3 for this part. However, to proceed with the CloudFront Distribution in a later blog post, you will need to Block all public access for the S3 bucket.

  1. In the Amazon S3 console, choose Create bucket and enter the following:
    • Bucket name: cindy-crc
    • AWS Region: US East (N. Virginia) us-east-1
    • Uncheck Block all public access
    • Bucket Versioning: Enable
  2. Click on Create
  3. Once the bucket has been created, upload everything from your out folder into the bucket
  4. Click on your newly created bucket from the list and go to the Properties tab
  5. Go all the way to the bottom, under Static website hosting and Enable Static website hosting
  6. Specify the Index document as index.html
  7. Save your changes, and you should be able to see a Bucket website endpoint like http://cindy-crc.s3-website-us-east-1.amazonaws.com/
  8. In the Permissions tab of your bucket, add a new Bucket policy and save the changes
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
-                "arn:aws:s3:::Bucket-Name/*"
+                "arn:aws:s3:::cindy-crc/*"
            ]
        }
    ]
}
  9. Test your website endpoint by visiting http://cindy-crc.s3-website-us-east-1.amazonaws.com/

  10. You can stop here if you're satisfied with having a static website on Amazon S3, but for the purpose of the Cloud Resume Challenge, which will move on to using CloudFront, you need to remove the Bucket policy and Block all public access again.

by Cindy Le at Fri Aug 05 2022 19:22:00 GMT+0000 (Coordinated Universal Time)

Wednesday, August 3, 2022


Jennifer Croft

The Climb: Software Developer Journey

School…… it truly is a beast unto its own. I heard that Seneca College has a great reputation for producing good software developers. It wasn't even a graduate of Seneca who told me this. Being in my 40s meant I had to study a little harder than most, but "tenacious" is my middle name. Haha! Seriously though, becoming a developer was rough. Definitely not for the faint of heart. It's best not to be intimidated though. Face the challenges head on. Building a foundation for software development requires experiences that take you way out of your comfort zone. For someone who is new to or considering software development as a career, I have some recommendations that will make the journey a little less painful.

For starters you should probably get acquainted with basic programming. Learning the basic theory will help develop some understanding as you learn more difficult practices later. You have to learn how to think like a programmer. Try not to sweat the small stuff because it WILL come to you. Remember to be patient with yourself and don’t rush progress. Free courses and tutorials like w3schools and more will help you understand things like Encapsulation, Polymorphism and Abstraction. Those 1% of you who naturally get 99% in your programming courses… You are so lucky!! Remember that it’s better to try for those high grades. If you do badly on a subject, take it again. Retaining that foundational knowledge is your key to success.

This field offers a vast range of options when it comes to jobs. There are easy jobs where teams work together, and there are jobs that involve a lot of struggle with strict deadlines. Finding what is right for you comes down to the individual and what your career goals are. Keep an open mind about everything because you never know where your interests may end up. I know this from experience. In my first year, I didn't want to touch web programming in JavaScript. Absolutely hated JavaScript. Now my preferred languages are C# on the .NET Framework and JavaScript!

After everything was over and I evaluated my experience, I'd say it was worth it! The struggles and the victories are minor compared to the sense of accomplishment gained. I can't speak for everyone, but the experience gave me some personal growth. Time well spent! It also leads to a great career! So be brave and reach for the stars….. I mean computer!

by Jennifer Croft at Wed Aug 03 2022 03:33:00 GMT+0000 (Coordinated Universal Time)

Monday, July 25, 2022


Alex Romanova

Hacktoberfest preparations

About Hacktoberfest

Hacktoberfest is a month-long event on GitHub that encourages contributing to open source projects. It happens every year during October. If you complete at least 4 approved pull requests, you get sent merch!

Read more here about the 2021 Hacktoberfest.

For contributors

Many people aren't familiar with open source, so this becomes a great introduction. There is a specific label, "good first issue", that is targeted towards newcomers: people who are new to programming, new to GitHub, new to contributing to open source... All of those apply. Issues with this label tend to be simple and don't have to be coding related. They could be text fixes in documentation, simple changes in colour, translations... The experience of completing a "good first issue" lets people get used to the whole git system, improves their confidence, and introduces them to various new and interesting projects they otherwise would never find.

My experience as a contributor was hectic. I struggled to find issues appropriate for my skill level, and I could never be sure if I could solve them in time, or whether they were too easy to count. Many projects lacked guidance. Most problems I had were with starting out - not even with fixing an actual coding problem.

Therefore, I decided to fix this issue for those like me. After my experience with Telescope, I have started seeing it as my project. I am a part of this now, and I am responsible for its growth to some degree.

For maintainers

As a repository owner, you are required to prepare issues for contributors. You need to make sure a newcomer can understand what is required with ease so that they focus on the actual implementation.

Once the issue is ready, maintainers need to assign the "Hacktoberfest" label to it, answer questions from contributors who find the issue and try to solve it, and assign it to whoever wants to solve it. When they have made their pull request - review it and, if all is good, merge. Add the "hacktoberfest-accepted" label at the end to make sure the issue counts towards the contributor's progress.

Now on the other side!

Now that I'm familiar with Telescope, I can try attending Hacktoberfest from the maintainer side. It will be a different experience, but now I know what to do, as I have already been in the contributor role. I will also be trying to do the same for SevenTV, which doesn't use GitHub issues for their task tracking. So I would have to create and maintain issues myself.

Telescope progress

Isn't this kind of early...? Yes. But there are many issues and I know myself well. I'd better at least start this; I know I won't finish it any time soon. Hopefully, if I lay out the structure and a direction, people will follow. I'm not the only maintainer of this, so I want us to cooperate and be ready when October comes.

Actual steps

Firstly, I have started a project to track the issues preparation.


Please make sure to read the README of it (if you are a maintainer). If not - I will show you what is there now.

Of course, by the time you are reading this, it might change. However, the general idea/outline is there.

Once Hacktoberfest comes, we will have the list of issues already. We would only need to assign the label to them.

Additionally

I thought I might as well make an introductory video for hacktoberfest participants. If you read my past posts, I had mentioned something called "issue seller" before. Hacktoberfest would be a great opportunity to try out this format.

It is also a good experience for me to get familiar with some issues/parts of the project I never dealt with before. Before I introduce them to others, I should know enough about them myself.

by Alex Romanova at Mon Jul 25 2022 02:27:00 GMT+0000 (Coordinated Universal Time)

Wednesday, July 6, 2022


Abdulbasid Guled

Appwrite: Get it cause we write Apps

Let's pretend that today's my birthday and completely ignore what I did today to focus on an intriguing framework I came across recently, Appwrite. It's supposed to be an alternative to Supabase, another open source framework for setting up a secure backend for you. It includes all the usual stuff such as:

  • Database
  • Authentication
  • Storage
  • Serverless functions
  • Security

It even includes the ability to get geo-related user data, something I've been wanting to work with for some time.

Having worked with Supabase on personal stuff, and not getting to work with JS recently as my internship mainly uses Java, this really caught my eye. You can read more about Appwrite here but I want to focus on the simplicity of getting it setup and using it.

The service is self-hosted, something we can take advantage of using Docker (which I recommend any developer learn even if you won't ever use it, it's an amazing tool), and setting it up to run is even easier. Here's the example code shown on their site:

// Init your Web SDK (Client and Account come from the `appwrite` npm package)
import { Client, Account } from 'appwrite';

const client = new Client();
client
    .setEndpoint('http://localhost/v1')
    .setProject('455x34dfkj')

// Register User
const account = new Account(client);
account.create('unique()', 'me@example.com', 'password', 'Jane Doe')
        .then(response => {
            console.log(response);
        }, error => {
            console.log(error);
        });

As you can see, we just set up the client object, then we can do whatever we want. Whether it's creating a new account, a new session, or even a new JWT, it's all done here. It can even connect with the OAuth provider of your choice, which is amazing.
Their README and documentation are excellent, by the way. See what I mean here
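
To give a rough sense of what other account calls look like, here's a short sketch based on their Web SDK examples. The exact method names vary between Appwrite SDK versions, so treat these calls as illustrative assumptions rather than the definitive API:

// A hedged sketch: signing an existing user in and minting a JWT.
// Method names differ across Appwrite SDK versions; check the docs
// for the version you actually install.
account.createSession('me@example.com', 'password')
    .then(session => console.log('Logged in with session', session.$id))
    .catch(error => console.log(error));

// Issue a short-lived JWT tied to the current session
account.createJWT()
    .then(jwt => console.log(jwt))
    .catch(error => console.log(error));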

While geared mainly towards mobile developers (Flutter specifically), any web developer can also take advantage of Appwrite to create a secure backend for their purposes. It really makes it an easy decision to focus on the frontend. Now, services like this and Supabase are generally made with the intention of hiding the complexity of creating a backend service, so you miss out on learning exactly how web servers are made. If you already know how they generally operate, however, this service enhances that developer experience tenfold. I plan on trying this out on a future project whenever I get the motivation to get back into web development. In the meantime, give it a try. You won't be disappointed.

by Abdulbasid Guled at Wed Jul 06 2022 03:44:30 GMT+0000 (Coordinated Universal Time)

Monday, June 27, 2022


Tue Nguyen

Socket event listener fires multiple times in React component

Here is something I've learned doing my side project that I think is fundamental but also often overlooked.

Some context

The project simply contains 2 major parts:

  • Front-end: React and Material UI
  • Back-end: REST API using Express, Typescript, Firebase and Socket.io

My goal was to inform the front-end when data is updated using Socket.io so that it would try to fetch data again.

Incorrect attempts

I had these incorrect code snippets that listen to a socket event and fetch data again.

My first attempt was this: on every re-render, socket.on('USER_JOINED') registers the same callback function again, so once the USER_JOINED event is emitted, the callback fires multiple times when we only need it to execute once (to reduce API calls and unnecessary work).

const Expense = () => {
  const [initialValues, setInitialValues] = useState(null);
  const [expense, setExpense] = useState(null);
  const { socket, toLogIn } = useContext(SWContext);
  // ...
  socket.on('USER_JOINED', (socketData) => {
    // fetch data again
  });
}

My second attempt was to register the callback once when the component is mounted, but I still experienced multiple callback executions. This is because even after the component is unmounted, the callback stays registered with the socket (I use one single socket instance, a Singleton pattern). The callback also wouldn't have access to new state if the state was updated.

  useEffect(() => {
    socket.once('USER_JOINED', (socketData) => {
      // fetch data again
    });
  }, []);

Solution

This is what works for me so far. I register a socket event handler, and a cleanup that calls socket.off, every time expense changes. This way there's only one socket event handler registered at a time, and it's detached when not needed.

  useEffect(() => {
    socket.once('USER_JOINED', (socketData) => {
      // fetch data again
    });
    // return a cleanup function so the listener is removed
    // when the effect re-runs or the component unmounts
    return () => socket.off('USER_JOINED');
  }, [expense]);

I imagine this practice can also apply to similar situations like window.addEventListener().
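
For example, here's a minimal sketch of the same register-then-clean-up pattern with a DOM event (the handler itself is hypothetical):

  useEffect(() => {
    // hypothetical handler; the point is the register/clean-up symmetry
    const onResize = () => {
      // re-compute layout, re-fetch data, etc.
    };
    window.addEventListener('resize', onResize);
    // remove the same handler reference when the effect re-runs or the component unmounts
    return () => window.removeEventListener('resize', onResize);
  }, []);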

Take a look at my project if you're curious https://github.com/TueeNguyen/SplitWise3

by Tue Nguyen at Mon Jun 27 2022 03:53:36 GMT+0000 (Coordinated Universal Time)

Tuesday, June 14, 2022


Ray Gervais

Writing an Awesome List using the GitHub API

One morning, I decided that I wanted to highlight the importance of a project claiming to be blazingly fast

A recent trend you may have noticed on GitHub and other Git-based platforms is the concept of the awesome-list (https://github.com/topics/awesome-list), and being the cynical one that I can be at times, I wanted to create a list to announce a very important aspect of open source software development: is your program blazingly fast? So, I decided to do exactly that: https://github.com/raygervais/awesome-blazingly-fast

The rest of this article goes over the process and thoughts I had as I implemented a few small script-like programs which would help me with this quest, so let’s dive into it.

Retrieving The Datasets using the GitHub API

GitHub provides a free API which makes it easy to pull data for various queries, as long as you don't mind being rate-limited. The search limit is 10 queries per minute for non-authenticated API calls, and 30 per minute for those using an application token. While I was writing out the initial code, being limited to only a few calls didn't impact the process aside from forcing me to add some error-handling logic for when the API responses started to return non-200 statuses:

if not os.path.exists(path):
	print(
		f"Getting page for language {language} from {year} with query: 'created:>={year}-01-01..{int(year)+1}-01-01 language:{language} {search_query}'"
	)
	repos = get_repositories(
		f"created:{year}-01-01..{year}-12-31 language:{language} {search_query}"
	)

	if repos is None:
		print("GitHub Ratelimit exceeded")
		exit(0)

To be kind to both the API and my own logic, I opted to cache the retrieved datasets. The first implementation (which I believe is commented out in the Python API datagen logic) pulled the first 1000 repos per language. GitHub's restriction of only returning 1000 results, regardless of whether you're authenticated, made the dataset and the associated list look anemic compared to other awesome-lists, and I was certain there were more repos than the original dataset implied. So, I rewrote the query and caching logic to query for all repos with the search phrase blazingly fast year by year, and quarter by quarter if the dataset was big enough to be broken down. This allowed me to retrieve, for example, over 1000 JavaScript repos created between 2020-01-01 and 2020-03-01, and so on. Because of the caching, this also meant that to refresh a newer dataset, it is best to delete the old one first. With that change, our datasets became much more impressive and expansive, so now we have to parse and export our data to our README.md to complete the project!

Parsing an Awesome List

Originally, I wrote this entire little project in Python since that was the quickest means to an end, and also to experiment with the data structures at the start. Though the code worked, it really bugged me how badly I wrote it. While wearing the smartest thinking-cap that I could find, I opted to do what would be the most logical approach: rewrite it in Rust.

There’s a few reasons I opted for Rust:

  1. I write a lot of Go for work, so I wanted to contrast the experience and design principles between the two purely as an experiment. Recently, I've also been running into some post-honeymoon limitations of Go, which I'm sure isn't helped by my lack of enterprise-level experience with the language.
  2. How can you develop a non-blazingly-fast parser for an awesome-list about blazingly fast repositories?
  3. I hadn’t suffered enough headaches that week.

The implementation started off smoothly, the cargo ecosystem and Rust compiler make getting set up a breeze on Fedora, but the moments after tell a different story. In retrospect, I can admit it was my lack of patience and wanting to get the program finished which made the overall process lackluster:

  • The moment I started writing code to iterate over the dataset directory, the borrow checker decided to bully me over a single string (the file name).
  • An hour or few later, and I was now being interrogated for following the serde_json examples for unmarshalling the JSON datasets. Not a problem, just a small integration lesson that was easier to resolve than the first.
  • By the next weekend when I resumed development, no matter how much Googling I did, I couldn't figure out how to properly append the various items to a mutable vector.

Most of the problems I encountered derived from my lack of experience using Rust, and more so, I imagine, my design didn't jive with how Rust applications are supposed to be written. It was this experience which actually has me considering printing out the Rust Book and going over it lesson by lesson. Still, my drive to finish the project led me down a different path: rewrite it in Go. Originally, I wanted to experiment with Rust, but I can accept when I've lost interest in pushing forward with a specific goal when similar ones can be implemented instead; for those curious, I've included a snippet of the (terrible) Rust code that I was writing.

fn main() {
    let mut languages: HashMap<String, Vec<Item>> = HashMap::new();

    for file in fs::read_dir("./data").unwrap() {
        let path: path::PathBuf = file.unwrap().path();
        let language = extract_language_from_path(path.clone());

        let dataset = transform_data_to_objects(path);

        if !languages.contains_key(&language) {
            languages
                .insert(
                    language.clone(),
                    dataset.expect("failed to load dataset").items,
                )
                .expect("failed to enter new language into hashmap");
        } else {
            languages
                .get_key_value(&language)
                .expect("failed to retrieve key")
                .1
                .append(&mut dataset.expect("failed to load dataset").items)
        }

        for language in languages {
            println!("{} {}", language.0, language.1.capacity())
        }
    }
}

// returns the language using string parsing, given that all files are formatted
// as `languages-repositories-<LANG>-<YEAR>.json`
fn extract_language_from_path(path: path::PathBuf) -> String {
    path.to_str()
        .expect("failed to convert path to string")
        .trim()
        .split("-")
        .nth(2)
        .unwrap()
        .to_string()
}

fn transform_data_to_objects(
    path: path::PathBuf,
) -> Result<LanguageResponse, serde_path_to_error::Error<Error>> {
    let deserializer = &mut serde_json::Deserializer::from_reader(
        fs::File::open(path).expect("failed to open file"),
    );

    serde_path_to_error::deserialize(deserializer)
}

I won’t go over the Go implementation since I’m saving that for another blog post, but I will conclude this post with the following: I stopped writing the program in Rust purely because of my own faults, not the language itself. I still look forward to learning and using it in the future, but the timeframe and expectations of this project didn’t align with my original idea to use Rust; simply put, Rust demands you write good code, a task which I’m still learning to do.

So with that being said, did your repo make the list?

Resources

by Ray Gervais at Tue Jun 14 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

An Introduction to Git Worktrees

For when experience leaves you with a PTSD-like relationship with the command git stash

Let me give you a scenario, and I’ll let you tell me if it’s one which you’ve encountered before. Simple deal I think, so let’s get on with painting a picture:

It’s late on a Monday night, you’re in the zone working on a pivotal feature for the sprint which touches ten files to handle parsing a new data format for your ingestion platform. It’s all making sense. This will resolve the weekly data upload error that your QA has been harping about; they mean well, you know. You mistakenly check Slack after hearing a notification, dragging you out of your 1s and 0s. It appears there’s a critical bug which took out the latest production instance, which was built from release-01.22-stable, a branch that isn’t as stable as the team hoped it would be. The irony doesn’t escape you, but no time to laugh now.

In this scenario, what would you do if you were the developer who had to investigate the bug in the code? More so, how would you approach your investigation? For many, including myself, it would look something like this:

> git reset src/main.rs lib/parser.rs 
> git stash -m "feat-ingest: please remember where you left off, modifying file ingest.rs"
> git fetch
> git checkout origin/release-01.22-stable

It’s a Monday, of course this would happen. After a few hours of debugging and reproducing the issue, you implement a fix which passes all pipeline tests and fixes the main problem - a band-aid this fix cannot be. The rest of the dev team joins you online and reviews the fix; WTFs can be heard over the huddle.

Let’s depart from reality and assume that the fix was merged within the first commit, no revisions were needed. You unleashed your inner 10x developer for the first time, but be careful with using such power. You return to your other branch:

git checkout feat-ingest
git stash pop
cargo build

Only, it doesn’t build.

After some battling with your memory and recent actions, you try to remember what changes are needed to the now-reset files to get you back into the build-test-build flow you started in. After 20 minutes or so, you’re back to where you were previously, before you put on the firefighter hat.

Surely there must be a better process?

The Better Approach!

What if I mentioned there was a better approach? Enter the git worktree, which enables you to work with as many branches and their unique setups as you’d want, all without any additional tools or dependencies! For the TLDR, git worktrees allow for folders to be created which map to a specific branch, so you could have master, feat-ingest, and release-01.22-stable all checked out locally for when they’re needed. If you’re curious, keep reading, oh adventurer, and we’ll explore git worktrees together.

I first discovered the concept of Git Worktrees while watching ThePrimeagen’s explanation of how, all this time, he and software developers as a whole were approaching how we use git wrong. Being curious, and also easily distracted by the new and shiny, I decided to try using worktrees on a few personal projects before I’d take it to the team at work. Here’s a rundown of common commands, and how worktrees work.

Opensource.com describes Git Worktrees as,

A Git worktree is a linked copy of your Git repository, allowing you to have multiple branches checked out at a time. A worktree has a separate path from your main working copy, but it can be in a different state and on a different branch. The advantage of a new worktree in Git is that you can make a change unrelated to your current task, commit the change, and then merge it at a later date, all without disturbing your current work environment.

But, what does that look like?

# clone the `bare` repository
git clone --bare <URL> 
cd <project_name>

# create worktrees for any branches you want locally
git worktree add master
git worktree add feat-ingest

# move to your development branch
cd feat-ingest

# do your development magic!
emacs .

Why the --bare argument

git-scm describes the bare argument as,

Make a bare Git repository. That is, instead of creating <directory> and placing the administrative files in <directory>/.git, make the <directory> itself the $GIT_DIR. This obviously implies the --no-checkout because there is nowhere to check out the working tree. Also the branch heads at the remote are copied directly to corresponding local branch heads, without mapping them to refs/remotes/origin/. When this option is used, neither remote-tracking branches nor the related configuration variables are created.

So, how could you utilize this in the scenario we started this post with?

# from the root of the cloned repository
git worktree add release-01.22-stable

# open up the branch in your editor
code release-01.22-stable

# do your fixes, hacker voodoo, commit the changes
...

# now go back to your flow!
code feat-ingest

Because branches exist as folders within your repository, it’s as easy as switching folders when you want to change your current working branch. Worktrees come with other useful helper commands as well, which opensource.com explains better in the section below.

Listing of Active Worktrees

git worktree list
/home/ray/code/myproject  15fca84 [dev]
/home/ray/code/hotfix     09e585d [master]

Removing Worktrees

$ git worktree remove hotfix
$ git worktree list
/home/ray/code/myproject  15fca84 [dev]

Moving the location of a Worktree

$ mkdir ~/Temp
$ git worktree move hotfix ~/Temp
$ git worktree list
/home/seth/code/myproject  15fca84 [dev]
/home/seth/Temp/hotfix     09e585d [master]

Common Use Cases Improved by Worktrees

Andrew Lock’s post describes some common use cases, which realistically make worktrees a replacement for git checkout in most scenarios where you’re interacting with multiple branches:

  • Debugging a colleague's branch
  • Bug fixes
  • Working on multiple tickets in parallel on separate branches

Resources

by Ray Gervais at Tue Jun 14 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Thursday, May 26, 2022


Loran

Internal dialogue of a procrastinator

R: rational part
I: instant gratification part

R: You should really start working on this homework assignment, it will take you a couple hours at most.
I: It is ok, we can do it tomorrow morning. Let's set up the alarm really early. I am so sleepy now after all, and tomorrow we will have some coffee.
R: But if we wake up 2h before the deadline, it is too early for us! We are not going to have enough sleep!
I: Okay, maybe we should start. But wait, let's first look at our news feed. It would not take too much time, but we will feel so much better.
R: I don't know... Are you sure we are not going to get stuck again?
I: Absolutely not! It is not that much to read, we read most of it today already.
R: Sounds reasonable. Okay.
... 20 minutes later.

R: I am closing this now! Let's open the slides.
I: Okay.
R: Why are we lying on the table eyes closed?
I: I am so sleepy and tired. Maybe re-consider doing it in the morning? Look, if we go to sleep now, then waking up early, you still would get your 7-8 hours of sleep.
R: You know how it will go. We would fall asleep at our usual time anyway. Imagine how tired you will be in the morning!
I: Not if we have coffee!
R: Look, we should at least start tonight.
I: Okay. Also, I think we really need to make a blog post called "internal dialogue of a procrastinator". Look, it is very important that we do it now, because it is now or never. Remember how you always wanted to have a blog?
R: Agreed, sounds reasonable. I guess if we have some time left after that, we could look at the homework. But I will set the alarm for tomorrow.
... 20 minutes later.

I: Finishing the blog post! Told you it was going to be fast.
R: Cool, we can still do some homework before going to bed.
I: Sure. But your friend here needs you to watch this video. And you better do it, you can't tell them you are busy, they can see you not working on your homework.
R: Okay. Remind me, why are our priorities like that? Why can't we be like normal people, who do important things first? Now I will have to punish you with feeling bad for not starting early.
I: Oh no. What's the point of that? I thought we were friends?
R: Sorry, it is in your... no, in our best interest. Next time you remember how bad it was, so you have a bit more motivation to start earlier.
I: If only that had ever worked.
I: At least now we agree that we are postponing everything towards morning.

by Loran at Thu May 26 2022 03:48:54 GMT+0000 (Coordinated Universal Time)

Sunday, May 22, 2022


Ray Gervais

Setting up Fedora 36 From Scratch

For the first time in years, I managed to break my Fedora installation to the point where I thought, “hey, why not install from scratch and start fresh?” So I did.

Evaluating Fedora Media Writer

Recently, I decided to change it up and use the official Fedora Media Writer, which has replaced Rufus, Linux Live USB, and Balena Etcher as my installation media creator of choice for the past few years. Using the tool itself is as smooth and polished as one could hope, but I did discover two gripes:

  1. When used against a USB drive, it partitions the drive to the size required for the installation + 2GB (I may be wrong on the number) for live boot purposes. This means that if I provide it a 256GB USB, for example, resizing the drive to reclaim the lost 200+GB is a PITA due to its known behaviour, which requires modifying the USB’s partitions from the live boot itself vs on the host machine which created it.
  2. I’m not sure if it’s based around user error, hardware failure, or a software bug, but there’s a good 50% chance that a USB formatted by Fedora Media Writer becomes unusable once ejected. I had done some research into this when I was testing a few weeks ago, and am looking into how to fix two separate USBs which are experiencing the same issue after using Fedora Media Writer on two separate machines. Working out how to resolve that issue as I type this. Anyways, that’s another blog post if I can figure it out.
    1. Update: I managed to restore the 256GB which was eaten by FMW simply by booting into it (I guess Fedora couldn’t read the UEFI boot sector?) and removing the fat32 boot sector. So, with that I booted into normal Fedora and reformatted the drive back to normal. Less annoyed now that I’ve fixed it, but still something to investigate for the other USB.

Installing Linux distributions has improved year over year, to the point where even Arch Linux received a newly polished graphical installer which I find humorous, but a fantastic step in making the distro accessible to more people. In my experience, installation of Fedora has always been a painless experience even when tied with manual configuration or partitioning. So, let’s skip the easy stuff and show what it looks like upon first boot after install with only a small change: the background wallpaper.

Stock Fedora Installation

Customized Fedora Installation

Setup & Configuration

The last time I set up Fedora was in 2019, and later in 2021 I took the current state and immortalized it using Ansible for a few-click setup of my developer environment. More can be read on that here, but this setup removes my previous decisions and automation so that I could evaluate and be intentional about every customization I make on the fresh installation.

Gnome Desktop

An interesting item that those who know me well will notice when looking at my setup: I didn’t find any reason to change the default desktop theme to a third-party theme (which my Ansible setup has done since its implementation). At most, I switch from the modest light default to my preferred desktop theme: dark. Gnome 42’s dark theme pairs well with the few extensions I add:

  • Blur My Shell → To bring the few outstanding eye sores into a modern look
  • Clipboard → Because who doesn’t want a clipboard?!
  • Pop!OS Shell → I’m in love with Tiling Window Managers, but find them inaccessible to those who also want a normal desktop experience or live in a world which is aware of non-terminal applications. This extension fulfills that ask by providing robust tiling to Gnome.
  • GSConnect → At the start of the year, I switched to the Google Pixel 6 Pro after having the iPhone 11 Pro Max since its release, and wanted to test out the KDE Connect + Android possibilities. So far, I’ve been very happy with being able to message, transfer photos and files, and even issue commands between my desktop and phone.

SHELL & CLI Tools

Since discovering fish, I’ve yet to go back. The times when I miss having a POSIX-compliant shell are quickly dismissed when I remember the load times of a fully customized zsh environment that could mimic fish’s out-of-the-box experience.

Language Servers

CLI Replacements

Fonts & Icons

Though it feels like I change fonts and color schemes weekly, I’ve attempted to implement a rule which reduces straying too far into the illusion-of-choice trap. Essentially, any font I use must be found within the standard Fedora repositories (and installed via dnf), or the RPM Fusion repositories, which are of course enabled as well. The following fonts are what I install and move about as the mood hits me:

For icons, though the latest batch of Adwaita icons looks great, there’s something about the unified look which dedicated icon packs provide that I can’t help but miss when not in use. My icon pack of choice is Papirus.

Adwaita Icons

Papirus Icons

Developer Environment

Before my Fedora 36 install, I was messing with Doom Emacs to compare against my primary development environments: Visual Studio Code & Neovim. It was an interesting two-week experiment, and I got fluid enough with the tool that I even drafted and published my last blog post all from Emacs itself. Perhaps I’ll explore more this summer, but for now I’ve gone back to the battle tested setup until I get annoyed once more.

Neovim

Used primarily for small edits and quick note taking. In the past, I’ve tried to use Vim for more tasks and contexts, attempting to approach an emacs-like environment where, from a single editor, I could do everything. Vim doesn’t scale as well in that area, so I found myself regressing back to using it as a text editor alone, which led me to mess with the concept of Doom Emacs. After that experiment, I was content with what I had learned and the possibility of using it more in the future, but I found that I was still more productive in Visual Studio Code when I need a dedicated development environment.

In the past, I had a 200+ line vimrc file which configured the editor with a whole bunch of plugins and packages: setting color schemes, enabling language server support for Go and Rust, formatting markdown tables to always look neat. I found NvChad in the past year, and have migrated to using it plus a far, far smaller init.lua file to set the few items that it doesn’t set by default.

Visual Studio Code

No matter how many times I try to escape the embrace of Visual Studio Code, I always find myself returning. It’s one of the best IDEs for any language once configured, and more so it’s powered by incredible tooling underneath & extensions which tailor the experience. In my Ansible setup, I included at least 20 extensions which are installed during the playbook’s run-through, but over time I’ve found myself not needing so many, or preferring to keep my Visual Studio Code setup as lightweight as possible. Here are the standard extensions which I have installed:

  • Better TOML → Better TOML Language support
  • Github Copilot → Your AI pair programmer
  • Go → Rich Go language support for Visual Studio Code
  • Nord → An arctic, north-bluish clean and elegant Visual Studio Code theme.
  • Paste JSON as Code (Refresh) → Copy JSON, paste as Go, Typescript, C#….
  • Rust → Rust for Visual Studio Code
  • Todo Tree → Show TODO, FIXME, etc. comment tags in a tree view

Closing Notes

An item that I’ve become increasingly aware of is my lack of interest in tweaking 24/7. Gone are the days full of testing & messing around; here are the days where my patience and focus demand stable & accessible. Sure, I may change the fonts & icons to fit my mood and interests for the day, but I don’t find myself constantly wanting to distrohop, try out other desktop environments, or mess with Nvidia drivers where it’s not needed. Part of this I can attribute to Gnome’s recent direction and releases, which, compared to the earlier Gnome 3.X days, is a breath of fresh air and much needed.

by Ray Gervais at Sun May 22 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Monday, May 16, 2022


David Humphrey

HTTP Testing with Hurl in node.js

The JavaScript ecosystem has been benefiting lately from pieces of its dev tooling being (re)written in Rust.  Projects like swc, Parcel 2 and parcel-css, deno, dprint and others have brought us tremendous performance improvements with tasks like bundling, formatting, etc.  Recently, my favourite Rust-based, HTTP testing tool gained the ability to be run in node/npm projects, and I wanted to show you how it works.

Hurl is a command-line tool for running HTTP requests defined in simple text files (*.hurl).  I learned about it by chance on Twitter over a year ago, and have been using and teaching it to my programming students ever since.  The name comes from the fact that it builds on top of curl's HTTP code. The real benefit to Hurl is that it lets you write simple, declarative tests that read just like the HTTP requests and responses they model.  Oh, and it runs them ridiculously fast!

Here's an example test file that makes sure http://example.net/404.html returns a 404:

GET http://example.net/404.html

HTTP/1.0 404

You can get much fancier, by setting headers, cookies, auth, etc. on the request and assert things on the response, including using JSONPath, XPath, Regexes, and lots of other conveniences.  You can also capture data from the headers or body, and use these variables in subsequent chained requests. The docs are fantastic (including this tutorial), and go through all the various ways you can write your tests.

Here's a slightly more complex test, which uses a few of the techniques I've just mentioned:

# 1. Get the GitHub user info for @Orange-OpenSource
GET https://api.github.com/users/Orange-OpenSource

# 2. We expect to get back an HTTP/2 200 response. Also, assert
# various things about the Headers and JSON body. Finally
# capture the value of the `blog` property from the body into
# a variable, so we can use that in the next step.
HTTP/2 200
[Asserts]
header "access-control-allow-origin" == "*"
jsonpath "$.login" == "Orange-OpenSource"
jsonpath "$.public_repos" >= 286
jsonpath "$.folowers" isInteger
jsonpath "$.node_id" matches /^[A-Za-z0-9=]+$/
[Captures]
blog_url: jsonpath "$.blog" 

# 3. Get the blog URL we received earlier, GET it, and make
# sure it's an HTML page
GET {{blog_url}}

HTTP/2 200
[Asserts]
header "Content-Type" startsWith "text/html"

I've been using Hurl to write tests for node.js HTTP APIs, especially integration tests, and it's been a joy to use.  I still write unit tests in JavaScript-based testing frameworks, but one immediate benefit of adding Hurl is its speed, which helps shake out race conditions.  Many of my students are still learning asynchronous programming, and often forget to await Promise-based calls.  With JavaScript-based test runners, I've found that the test runs take long enough that the promises usually resolve in time (despite not being await'ed), and you often don't realize you have a bug.  However, when I have the students use Hurl, the tests run so fast that any async code path that is missing await becomes obvious: the tests pass in JS but start failing in Hurl.
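
To make that concrete, here's a small, hypothetical Express handler (not code from a real project) with the classic missing-await bug; a slow JS test run often hides it, while Hurl's speed tends to expose it:

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical model whose save() resolves asynchronously,
// standing in for any real database write
class Item {
  constructor(data) { this.data = data; }
  save() { return new Promise((resolve) => setTimeout(resolve, 50)); }
}

app.post('/v1/items', async (req, res) => {
  const item = new Item(req.body);
  item.save(); // BUG: missing `await`, so the 201 can be sent before the write finishes
  res.status(201).json({ item: item.data });
});

app.listen(8080);

A fast follow-up request that expects the saved data to exist will then fail intermittently, which is exactly the kind of signal Hurl surfaces.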

I also found that Hurl is pretty easy to learn or teach.  My AWS Cloud students picked it up really quickly last term, and I think most node.js devs would have no trouble becoming productive with it in a short time.  Here's what one of my students wrote about getting started with Hurl in his blog:

"The learning curve is pretty simple (I managed to learn the basics in a couple of hours), less setup todo since it's just a plain file, the syntax is English friendly, besides the jsonPath that could take some times to adapt."

As I've been writing tests and teaching with Hurl over the past year, I've been pretty active filing issues. The devs are really friendly and open to suggestions, and the tool has gotten better and better with each new release.  Recently, I filed an issue to add support for running hurl via npm, and it was shipped a little over a week later!

Installing and Using Hurl with npm

Let me show you how to use Hurl in a node.js project.  Say you have a directory of *.hurl files, maybe inside ./test/integration.  First, install Hurl via npm:

$ npm install --save-dev @orangeopensource/hurl

This will download the appropriate Hurl binary for your OS/platform from the associated release, and create node_modules/.bin/hurl which you can call in your scripts within package.json.  For example:

"scripts": {
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\""
}

Here I'm using the --test (i.e., run in test mode) and --glob (specify a pattern for input files) options, but there are many more that you can use.  NOTE: I'm not showing how to start a server before running these tests, since that's outside the scope of what Hurl does.  In my case, I typically run my integration tests against Docker containers, but you could do it lots of ways (e.g., use npm-run-all to start your server before running the tests).
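
For example, here's a rough sketch of package.json scripts that start a server and run the Hurl tests together using npm-run-all; the start script, port, and timing are assumptions about your project, and you may still want something like wait-on (or a Docker-based setup, as above) so the tests only fire once the server is listening:

"scripts": {
  "start": "node server.js",
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\"",
  "test": "npm-run-all --parallel --race start test:integration"
}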

In terms of Hurl's output, running the two tests I discussed above looks like this:

npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
error: Assert Failure
  --> test2.hurl:14:0
   |
14 | jsonpath "$.folowers" isInteger
   |   actual:   none
   |   expected: integer
   |

test2.hurl: FAILURE
test1.hurl: RUNNING [2/2]
error: Assert Http Version
  --> test1.hurl:3:6
   |
 3 | HTTP/1.0 404
   |      ^^^ actual value is <1.1>
   |

test1.hurl: FAILURE
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 0 (0.0%)
Failed:    2 (100.0%)
Duration:  174ms

As you can see, both tests are failing.  The error message format is more Rust-like than most JS devs will be used to, but it's quite friendly.  In test2.hurl, I've got a typo in $.folowers, and in test1.hurl, the response is returning HTTP/1.1 vs. HTTP/1.0.  A few quick fixes and the tests are now passing:

$ npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
test2.hurl: SUCCESS
test1.hurl: RUNNING [2/2]
test1.hurl: SUCCESS
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 2 (100.0%)
Failed:    0 (0.0%)
Duration:  169ms

Part of what's great about Hurl is that it isn't limited to a single language or runtime.  Despite the title of my post, Hurl isn't really a JS testing tool per se.  However, being able to "npm install" it and use it as part of your local or CI testing adds something new to your testing toolkit.  I still love, use, and teach tools like Jest, Playwright, and others, but I'm excited that JS devs now have an easy way to add Hurl to the mix.

Hopefully this will inspire you to try including Hurl in your node.js HTTP project testing.  I promise you that you'll write less test code and spend less time waiting to find out if everything works!

by David Humphrey at Mon May 16 2022 20:02:14 GMT+0000 (Coordinated Universal Time)

Thursday, May 12, 2022


Eugene Chung

What I learned from Project 1 of Udacity's Data Science with Python bootcamp

Introduction

As part of the project I completed successfully, I used SQL to explore a database related to movie rentals. SQL queries were run and visualizations were built to showcase the output in a PDF. With the SQL knowledge I learned at Seneca, I was able to complete this section in a day (3 weeks' worth of work). I did this on my own time while I am on my work term with the City of Toronto.

(Pages 1-4 of the PDF report were included here as images.)

    For issues with plagiarism, I won't show my code at this time.  However, I may post what I have for my next project, which is a Python bike sharing application.  The final project just has to do with Git and source control, so I should be fine.  I was impressed to learn the different options in git log, and my horizons expanded as I honed my skills with GitHub and git.

Kind Regards,
Eugene Chung



by Eugene Chung at Thu May 12 2022 13:46:00 GMT+0000 (Coordinated Universal Time)

Monday, May 9, 2022


David Humphrey

Teaching AWS and Cloud Computing for Programmers

I'm beginning another term today, and the majority of what I'm teaching this time is Cloud Computing using AWS.  I spent a good part of 2021 researching and developing this course, and taught it for the first time during the winter semester.  Now that I've arrived at the "teach it again" stage, I wanted to reflect on it a bit.

We offer a number of courses on cloud computing already, but nothing tailored to developers.  As is so often my motivation, I wanted a course that provided a ready-made path for programmers to take, one which avoided the meandering, haphazard way that I had to learn it.

I decided to begin by asking friends in industry what they thought I should include and avoid.  I reached out to colleagues, former students, and friends working at big companies (FAANG), startups, and in government.  I spoke with people working in media, e-commerce, banking, the energy sector, and social media.  It was fascinating to hear the different perspectives they had, and where they agreed or disagreed.

"What should I teach a junior developer about the cloud?"

Here's some of what I heard:

  • "Everyone uses the cloud." Having cloud experience is really important for being able to go after good jobs in tech.
  • "The cloud is enormous.  You can't teach all of the cloud in a single course.  Your students are going to be overwhelmed".  Everyone is overwhelmed by it.  Focus on breadth over depth.
  • Focus on a single cloud.  Don't bother with multi-cloud
  • "The cloud is primarily Linux."  Make sure they know how to use it.  The cloud is glued together and automated with command-line scripts.
  • "The programming language you choose doesn't matter."  Use node, python, Go, whatever you want, they are all fine, but pick one you already know so you're not learning two things at once (our students know node.js the best, so I use that)
  • "Everything in source control. Period. Always."  Knowing git and GitHub is critical, and also that the entire lifecycle of software changes happens in git (proposal, implementation, testing, deploying).  Force students to work entirely in git/GitHub for everything.
  • "The cloud is cattle, not pets."  As quickly as possible, move them away from thinking about logging into machines to do manual tweaks, and instead think about code and automation
  • A lot of people said some version of "'It works on my computer' isn't useful," "Your code isn't useful if it isn't running in production," or "Cloud is what happens after you write your code."  Everyone said some version of "CI/CD pipelines are critical for a junior dev to understand."
  • "Most cloud workloads are run in containers." Almost everyone told me to focus on containers vs. manually using cloud instances, and to learn how to use them in dev, testing, CI/CD, and production.  "Docker and compose are good choices at this stage"
  • "Kubernetes is really important" and also "By no means should you teach Kubernetes in this course!" since it's too much ("even for industry").  Leave it for later in their journey
  • "Help them understand the cloud's secret sauce: managed services."  Learn how to leverage them in your applications vs. running your own.
  • Security becomes a central concern in the cloud.  Understand the principle of least privilege, the importance of the software supply chain, how to cope with dependencies, etc.  Learn to use tools to help manage the complexity.
  • Similarly, privacy matters more because all your code and data are literally in the cloud now.  Understand the importance of limiting the data you collect/store (what if there's a breach?), and why Personally Identifiable Information (PII) is suddenly a concern in things like log messages.
  • Make sure they know how to manage configuration and secrets properly
  • Use structured logging everywhere and log aggregation/observability tools to deal with things at scale (a minimal sketch follows this list)
  • Because "everything is always failing" in the cloud, you have to write your software with different expectations
  • You have to understand the pricing structures of your choices and how to avoid a massive bill.  The paradox of the cloud is: "The cloud is cheap" but "The cloud is expensive."  You can fix things by throwing money at your problems, or you can understand and use better designs. Tagging helps you figure out costs later on.
  • Almost everyone I spoke to de-emphasized serverless, which surprised me--I thought it would be near the top of their list, but no one I spoke to thought it was critical to learn at first.  I've come to the conclusion that it should almost be its own course vs. something I do in this one (maybe it should be the next one I make)
  • Show them how to manage resources manually via the console, but also how to use Infrastructure as Code (IaC) to automate it
  • "Learn AWS" - most people agreed that AWS isn't the easiest option, but is the most valuable to learn.

Course Outline

Based on the feedback I got, I developed a course based on AWS that works through the following major topics:

  • Cloud Computing and AWS
  • Using the AWS Console, CLI, and SDK to manage AWS resources
  • Securing apps with Amazon Cognito User Pools, OAuth2
  • Configuring apps with Environment Variables and Secrets
  • Using git and GitHub to manage source code
  • Using GitHub Actions to create a Continuous Integration (CI) workflow that runs Static Analysis, Unit Testing, and Integration Testing
  • Using and Managing EC2 instances
  • Working with docker, authoring Dockerfiles, docker-compose, and Docker best practices
  • Working with public and private Docker registries to push, pull images (Docker Hub and Elastic Container Registry)
  • Using GitHub Actions to create a Continuous Delivery (CD) workflow (build and push images to registry, automatic deploys)
  • Deploying and running containers on AWS (manually and automatically as part of CD workflows)
  • Running containers in CI/CD for integration testing, and simulating AWS with docker-compose (localstack, dynamodb-local, etc)
  • S3 for object storage
  • DynamoDB for NoSQL
  • Infrastructure as Code and CloudFormation

Along the way, I have them build an HTTP REST API microservice, and slowly integrate more and more pieces of AWS, evolving their approach as they go.  Over 10 labs and 3 assignments, they get to work with nearly a dozen AWS services and maintain a single app for 14 weeks.

AWS Academy Learner Lab

When I started my course development, I decided to target AWS Educate.  It promptly disappeared a few months before I was set to do the first offering ("...everything failing all the time" right?).  I had to quickly pivot to Amazon's new offering, AWS Academy.

The majority of what's offered through AWS Academy is pre-canned, lab-based courses that can be delivered at any academic institution.  I'm not sure who the audience is, because I don't know too many professors who work this way (I always develop and create my own courses).  However, one of the "courses" is called the Learner Lab, and it lets students access AWS resources without a particular course pathway.

To use AWS Academy, an academic institution first has to become a member (luckily, my institution already was).  Then, you have to get "nominated" by an existing AWS Academy Member before you are allowed to create an Educator account.  After this, you have to work through a number of Educator Orientation and On-boarding modules (these took me 1/2 day).

Once you've jumped through the necessary hoops, you can start to create Classes and invite your students to create Student accounts.  You essentially get a Learning Management System on top of AWS.  I didn't use any of its features (we have our own LMS), but you could, and it seemed well made.

What's nice about the Learner Lab is that students don't need to create their own AWS Account and never need to enter a credit card (this is huge).  Upon creating their account, each student is given $100 credits to use during the course.  If they are enrolled in multiple courses, they get $100 per course (i.e., vs. per student).  Free tier spending doesn't get counted against this $100, so it goes pretty far.

A student's credits cannot be increased or renewed.  And, students being students, it's something to be aware of since any number of things can happen that might mean a student gets locked out of the lab before the course is over.  However, students being students, you also aren't going to wake up to a $10K bill in the middle of the term.  It's a trade-off, but I think it mostly works.

The Learner Lab is essentially a sandboxed AWS account.  You log in to AWS Academy and "Start" the lab environment.  Doing so activates a pre-made AWS Account, which runs for 4 hours before being shut down.  If you need to extend your time, click "Start" again and you get another 4 hours.  While the lab is running, you can use the AWS Console, or other AWS APIs like you normally would.  When the lab is stopped, services like EC2 instances are paused (they get restarted when the lab is restarted).  However, many services still keep working.  For example, S3 buckets, DynamoDB tables, even EC2 instances that are being managed by other services stay up (e.g., Elastic Beanstalk).  It's a little hard to say what is and isn't running when you stop the lab, and therefore what is and isn't costing you credits.

This simplicity is also one of the downsides. Since you have almost zero ability to drill into an account and figure out what is currently running or where your cost is coming from, you only know that you've spent "$23," and that's it.  I had one student come to me in a panic when he noticed he'd suddenly spent $70 in two days.  "What's causing this!?"  Great question!  All of the usual ways you'd figure this out in AWS are not accessible in the Learner Lab, so good luck tracking it down. Thankfully a professor can connect to a student's AWS work area and look around (also useful for evaluations, where you need to check how things are being used).

The Learner Lab account has access to ~50 AWS services in one of us-east-1 or us-west-2. This includes things like CloudFormation, Cloud9, CloudWatch, EC2, EBS, ELB, Lightsail, Rekognition, S3, SageMaker, RDS, SNS, SQS, etc., which covers a lot.  But it also leaves out some strange things: for example, no IAM, only 1 of the 17 ways to run containers, no way to do API Gateway with Lambda, no Route53, etc.  If what you want to do is available, it generally works great, but some services have extra limitations.

For example, with EC2 you can only run Amazon Linux or Windows AMIs, and the largest instance type you get is r5.large (2 vCPU, 16 GiB RAM).  However, you can run up to 32 vCPUs in parallel, so you can run quite a few instances at once.

The setup works, but it's not perfectly aligned with how most CS departments think about using computing resources.  Most profs I know don't only give isolated labs.  You have project work that builds week to week, and the ability to work with long-lived resources over the term is important.  There was one point in the winter where all of the Learner Lab AWS resources got deleted (I mean for everyone, not just my students!).  The AWS Academy mailing list of other professors around the world came alive as all kinds of people talked about term work being lost and what a disruption it was.  It was pretty clear that people assume you can do term-based work in addition to discrete labs.

I think Amazon imagines a world where you use CloudFormation templates to work with stacks per lab.  That's one way to solve this, but you can't start learning AWS with CloudFormation, at least I don't know how you'd teach it that way.  Students need to work in the console manually for weeks or months before they can be expected to automate everything.
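
As a rough illustration of that stack-per-lab workflow (a sketch only; the template file, stack names, and helper functions here are hypothetical, not something AWS Academy provides), the AWS SDK for JavaScript can create a CloudFormation stack at the start of a lab and tear it down afterwards:

const { readFile } = require("node:fs/promises");
const {
  CloudFormationClient,
  CreateStackCommand,
  DeleteStackCommand,
} = require("@aws-sdk/client-cloudformation");

const client = new CloudFormationClient({ region: "us-east-1" });

// Create this week's stack from a (hypothetical) template file, e.g. lab-03.yaml
async function startLab(labNumber) {
  const TemplateBody = await readFile(`lab-${labNumber}.yaml`, "utf8");
  await client.send(
    new CreateStackCommand({ StackName: `lab-${labNumber}`, TemplateBody })
  );
}

// Deleting the stack removes everything the lab created
async function finishLab(labNumber) {
  await client.send(new DeleteStackCommand({ StackName: `lab-${labNumber}` }));
}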

Another thing making this harder than it needs to be is the fact that many third-party IaC or other automation tools are hard to use with the Learner Lab, because your credentials get rotated every time you start/stop the lab environment.  Imagine you need to use AWS credentials in a CI/CD pipeline, but they change every time you do your work.  I found ways around it through careful ordering of topics, and adding non-AWS services into the mix, but it felt like an unnecessary limitation.  My requests to Amazon to fix it were met with, "We'll look into it."
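
Hypothetically (this is a sketch, not something the Learner Lab or AWS Academy provides), a pipeline can at least fail fast when the pasted-in session credentials have gone stale, by checking them with STS before doing any real work.  The script below assumes the standard AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables have been copied from the current lab session.

const { STSClient, GetCallerIdentityCommand } = require("@aws-sdk/client-sts");

// Fail the CI job early if the Learner Lab session credentials have rotated
async function checkCredentials() {
  const sts = new STSClient({ region: "us-east-1" });
  try {
    const { Account, Arn } = await sts.send(new GetCallerIdentityCommand({}));
    console.log(`Using AWS account ${Account} as ${Arn}`);
  } catch (err) {
    console.error("Stale credentials: restart the lab and update the pipeline's secrets.");
    process.exit(1);
  }
}

checkCredentials();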

Stats

The Learner Lab gives you some limited analytics.  Using these reports, I can see that the average spend per student during the winter was $8.55 (total, for the whole term), and the average lab time was ~120 hours.  Only one student hit $80 (he accidentally reserved a dedicated macOS instance for a few days without understanding what that meant), and another spent 336 hours in the lab.  Time in the lab doesn't cost more per se, but it means resources are running longer.  I think it's great to see people being curious and exploring.

The majority of what we did fit easily within the Free Tier. I was pretty nervous about how what I wanted to do would translate into per-student cost: a professor can make recommendations (do this, please don't do that), but you never know what your students will actually do.

I've learned that I could be more aggressive with what we spend and not run out of room.  Even with everything I did, I only managed to spend $20 (the Learner Lab includes a Test Student account, which professors can use to work like a student).  I'll see if this term's numbers match up with this conclusion, then slowly start turning up the volume.

Conclusion

Overall, I'm pleased with the whole thing.  The course prep and research was fascinating, but the development was overwhelming.  I wasn't sure what the students could and couldn't handle, but thankfully my first group proved that the idea will work.

I'm looking forward to updating the labs and projects in the coming terms to add different services, and expose the students to new corners of AWS.  I'm also looking at ways to add AWS and the Learner Lab to other courses I teach.  It's an obvious fit for my open source students, who need access to ephemeral development environments.  I'm not sure if it would be too complicated for my web students.  We'll see.

Wish me luck as I take a new (larger) cohort up the AWS mountain!

by David Humphrey at Mon May 09 2022 19:11:41 GMT+0000 (Coordinated Universal Time)

Sunday, May 1, 2022


Ray Gervais

Exploring Github Copilot During a Hackathon

Or, how to distract yourself with technology while trying to build new technology at breakneck speeds

Building a Golang-powered Back-end

I can remember it like it was just yesterday: I had a long weekend to myself planned to the brim with music and binging Lego Star Wars: The Skywalker Saga. It was the mental and youthful release that I was looking forward to after doing what felt like two weeks of continuous overtime to meet a deadline. Then, out of nowhere, while chatting with the now CSS-PRO @JamesInkster, the following (reenactment) exchange occurred:

James: I want to work on a project again
Me: Bet. What if we did a hackathon this long weekend?
James: Sounds like a plan.

And thus, my dreams of Lego and music would wait a few more days as we dove into the unknown. Instead, we put together a rough outline which would define the project, along with the requirement that, like a well-done hackathon project, it would be fully working and presentable by the end of the weekend.

After much brainstorming, James came up with the idea of Down 4 Whatever, an application based around the concept of throwaway prompts that could be responded to within an hour of creation. It had potential, and it followed the familiar REST + front-end paradigms, so we knew where to start. James wanted to focus on the CSS and front-end, which I was happy to avoid, so we decided I'd do the back-end, which would serve the front-end in the coming-back-into-style paradigm of server-side rendering. I chose Go for my back-end; though it's not a new technology for me, it allowed me to experiment with design patterns and concepts that I hadn't yet had a chance to try.

It was after I had written the database models that I realized it wasn't just me looking at the code. When I went to write the first of the CRUD functions for the models, I noticed a familiar prompt for the following: func (db *Database) AddRoom(room *entities.Room) error {. GitHub Copilot was active! In a moment of both curiosity and laziness, I thought: let's see what Copilot can do with my models if I ask it to write the CRUD functions. In a way, I could tell James would be annoyed because, from a certain point of view, I was bypassing many of the coding requirements that I was responsible for. Still, that depends on your point of view, and mine was more interested in seeing what Copilot would suggest. To summarize, Copilot provided the following CRUD solutions (abridged to just the function headers) with very minimal human interaction:

// entities.Room CRUD
func (db *Database) AddRoom(room *entities.Room) error
func (db *Database) UpdateRoom(room *entities.Room)
func (db *Database) GetRoom(id string) *entities.Room
func (db *Database) RemoveRoom(id string)

// This one took me giving a long-winded comment prompt, but Copilot figured it out
func (db *Database) ListExpiredRooms() entities.RoomList

// entities.Prompt CRUD
func (db *Database) AddPrompt(id string, prompt *entities.Prompt) error
func (db *Database) GetPrompts(id string) entities.PromptList
func (db *Database) RemovePrompt(id string, promptId string)

// entities.Comment CRUD
func (db *Database) AddComment(id string, comment *entities.Comment) error
func (db *Database) GetComments(id string) entities.CommentList

For the full source code, you can find the repository at github.com/raygervais/dfw. What I found interesting is that, while implementing the CRUD for the sub-resources (Prompt, Comment), Copilot took into account always retrieving the room associated with the Id provided in the arguments. Each function which interacts with a single room object has the following at the start:

room := db.GetRoom(id)
if room == nil {
    return errors.New("Room not found")
}

I find it really interesting that the suggestions also took into account my design, which was to have no true database and instead store the items in memory for their lifetime.

Feeding the Copilot Ideas

At one point close to the end of our weekend-hackathon experiment, I wanted to depart from the current project and just mess with something else; the idea being that the break would help realign and inspire us when we returned to finish the project. James had not really seen or worked with GitHub Copilot before, so I thought it would be awesome to show him why it was so hyped during its beta release. Yet, we didn't arrive at messing with it organically. Instead, we were doing leet-code styled questions purely to compare approaches to the solution. Upon providing my solution in JavaScript below (and not accounting for run-time / space complexity), I wondered if I could use it as a prompt for Copilot.

function rounds_won(scores) {
  let playerOneScore = scores.filter((score) => score === 1).length;
  let playerTwoScore = scores.filter((score) => score === 2).length;

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

So, given the algorithm above, which determines from an array input (e.g., [1,2,1,1]) which player won more games, could Copilot recreate the function's logic in various languages with no assistance aside from the language prompt? I stumbled upon this experiment purely by accident when I noticed that, below my function, Copilot would suggest Language: {language} if I provided // {language} on the previous line:

// C#

So, I figured let’s see what it would do with the language prompts.

C#

// C#
// Language: C#
public int RoundsWon(int[] scores)
{
    int playerOneScore = scores.Count(score => score == 1);
    int playerTwoScore = scores.Count(score => score == 2);

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

I found it really intriguing that, unlike my JavaScript implementation, the C# version used .NET naming conventions such as RoundsWon. In similar fashion, below the function I was offered a prompt for another language. Here's a sample of what Copilot prompted and provided:

C++

// C++
// Language: C++
int rounds_won(vector<int> scores) {
  int playerOneScore = 0;
  int playerTwoScore = 0;

  for (int i = 0; i < scores.size(); i++) {
    if (scores[i] == 1) {
      playerOneScore++;
    } else if (scores[i] == 2) {
      playerTwoScore++;
    }
  }

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Quite interesting that instead of using a vector iterator by default, Copilot suggested a for loop for languages where it's more commonly found.

Python

# Python
# Language: Python
def rounds_won(scores):
    playerOneScore = scores.count(1)
    playerTwoScore = scores.count(2)

    return playerOneScore > playerTwoScore and playerOneScore or playerTwoScore

For Python, .count(x) was leveraged instead of a for or filter loop. Quite efficient!

Ruby

# Ruby
# Language: Ruby
def rounds_won(scores)
  playerOneScore = scores.count(1)
  playerTwoScore = scores.count(2)

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore

Almost indistinguishable from the Python version, but notice that it's not a direct copy-paste: here Copilot has dropped Python's trailing : after the def and is using a ternary ?: expression instead of the and/or trick.

Haskell

-- Haskell
-- Language: Haskell
rounds_won :: [Int] -> Int
rounds_won scores =
  let playerOneScore = length $ filter (== 1) scores
      playerTwoScore = length $ filter (== 2) scores
  in if playerOneScore > playerTwoScore then playerOneScore else playerTwoScore

It was around here that I started to clue in that Copilot is also aware of different language conventions, such as the character(s) that start a comment.

Clojure

;; Clojure
;; Language: Clojure
(defn rounds-won [scores]
  (let [playerOneScore (count scores 1)
        playerTwoScore (count scores 2)]
    (if (> playerOneScore playerTwoScore)
      playerOneScore
      playerTwoScore)))

Swift

// Swift
// Language: Swift
func roundsWon(scores: [Int]) -> Int {
    let playerOneScore = scores.filter({ $0 == 1 }).count
    let playerTwoScore = scores.filter({ $0 == 2 }).count

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

Nice! It even got the quirks of Swift's $0 closure shorthand. Truthfully, this one wasn't offered as a prompt until I provided // Swift at the start; after that, Copilot was in sync with my ideas.

Go

// Go
// Language: Go
func roundsWon(scores []int) int {
    var playerOneScore = 0
    var playerTwoScore = 0

    for i := 0; i < len(scores); i++ {
        if scores[i] == 1 {
            playerOneScore++
        } else if scores[i] == 2 {
            playerTwoScore++
        }
    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

Objective-C

I figured, let’s throw it a curve ball.

// Language: Objective C
- (int)roundsWon:(NSArray *)scores {
    NSInteger playerOneScore = [scores filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"self == 1"]].count;
    NSInteger playerTwoScore = [scores filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"self == 2"]].count;

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Mmmm yes. Clearly valid Objective-C with the amount of brackets found in a three line function.

Java

// Java
// Language: Java
public int roundsWon(int[] scores) {
    int playerOneScore = 0;
    int playerTwoScore = 0;

    for (int i = 0; i < scores.length; i++) {
        if (scores[i] == 1) {
            playerOneScore++;
        }

        if (scores[i] == 2) {
            playerTwoScore++;
        }

    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Another language where I’m surprised copilot went with the tried-and-true for loop vs iterators or other functional methods.

V

// V
// Language: V
func roundsWon(scores: [Int]) -> Int {
    var playerOneScore = 0
    var playerTwoScore = 0

    for score in scores {
        if score == 1 {
            playerOneScore++
        } else if score == 2 {
            playerTwoScore++
        }
    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

PHP

// PHP
// Language: PHP
function roundsWon($scores) {
    $playerOneScore = count($scores, function($score) {
        return $score == 1;
    }

    $playerTwoScore = count($scores, function($score) {
        return $score == 2;
    }

    return $playerOneScore > $playerTwoScore ? $playerOneScore : $playerTwoScore;
}

If I had had this 6+ years ago, I might have enjoyed my internship writing PHP every day.

Rust

// Language: Rust using functional paradigms
fn rounds_won(scores: &[i32]) -> i32 {
    let playerOneScore = scores.iter().filter(|&score| score == 1).count();
    let playerTwoScore = scores.iter().filter(|&score| score == 2).count();

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Closing Thoughts

Let me preface first that aside from my original solution, I did not test any of the prompts and solutions provided by Copilot. Instead, I used the overall solutions as a thought experiment to compare what the AI was suggesting for each language vs how I’d do it in each language.

  • The original implementation of the Rust solution leveraged for loops, but I found it really interesting that providing the prompt // rust solution using functional paradigms produced a vastly different solution which used the filter() function.
  • I’m absolutely dumbfounded to think of the amount of training data which went into Github Copilot to the point where it’s learned the idioms of specific languages (such as C#’s naming semantics vs the original solution, the use of := in the go solution, etc), but can also see where more training is needed given that each variable per-language still uses the same variable naming-scheme of my original solution.
  • Code reviews between junior and senior developers are going to become quite the gray area, because I can already tell how easily some will use the exact prompts provided by Copilot as their final solution. I understand the appeal, but I fear that it’ll incentivize a generation of developers who don’t understand the codebase they are writing.


by Ray Gervais at Sun May 01 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Wednesday, April 27, 2022


Joel Azwar

OSD700 Afterthoughts

Note: posted a little late but I wanted to wrap up the OSD700 conclusion

So we've finished the Winter 2022 semester, and did I ever set myself up for a tough one. I have to admit I underestimated my ability to keep up, and with a lot of things piled on top of school, it was a little hectic to say the least. Nevertheless, I still want to wrap up and conclude my OSD700 journey.

Release 3.0.1 .... and .2, .3, and

so on and so forth. Safe to say we haven't 100% "successfully" shipped the release. But it's up and running okay on production.... at least it looks like it. Our Supabase migration didn't quite turn out as well as hoped, so we've got a major bug on prod right now and our feeds won't update with new posts. 

Tue really deserves credit for doing the majority of the debugging and releasing, and Duke has also been trying to come up with fixes for whatever's going wrong in prod. But right now I think everyone is trying to catch a break; the semester's just ended, everyone had a pretty heavy last two weeks, and I'm sure some of us are slowly but surely monitoring Telescope to eventually get this feature going right.

I hope that eventually we find that fix and get Supabase up and running on prod. That was one of Dave's last wishes coming up to the 3.0 release.

OSD700

So we've concluded the OSD700 course and I want to wrap it up and share my thoughts on the whole journey.

Telescope

Telescope, when I started working on it again at the beginning of this semester, was a far different web app than when I had worked on it 2 years ago in OSD600. It had all these new microservices, a new front end, and Satellite as well. It was overwhelming at first, but eventually I grew familiar with Telescope and was able to contribute, all the while learning new technologies at the same time.

It was great learning even more about Docker, especially considering that contributing to Telescope 2 years ago in OSD600 was what taught me Docker in the first place. I had never worked with microservices before, so jumping into them in Telescope was such a great opportunity. Tools like pnpm were also really neat, since there's an insane amount of dependencies in our Telescope monorepo. Then there were the new technologies that the team has been implementing towards 3.0. There's Supabase of course, which is just perfect for this course since it's open source!

It's always nice working on Telescope because you're guaranteed to come out of it having learned something new. Something that was also new to me this semester was sheriffing.

Project Managing

Sheriffing was something I hadn't done before: taking the lead in a team of really talented software developers and having a go at the wheel. Essentially, you keep everyone in check, know a little bit about everything, and help steer the direction of the next release. But nearing 3.0, Dave wanted us to prioritize different features than in the previous releases; he emphasized wanting new code, so the sheriffs from that week onward had to help steer Telescope in that direction.

It's funny seeing everyone take turns getting out of their comfort zone, and seeing the different approaches to engaging with everyone and getting responses out of them. For example, Alex would really take charge in her weeks as sheriff. Although she might not have lots of knowledge of the different areas of Telescope, she leverages her organizational skills to present the team with issues she's deduced to be high priority. She's also been pushing the use of GitHub Projects, which is a kanban board. Meanwhile, someone like Jerry, who is a great programmer and has pretty decent knowledge of most areas of Telescope, finds it easier in triage to talk about the different issues.

I enjoyed my weeks as sheriff; it helped me gain some insight into what PMing is like: leading an hour-and-a-half-long meeting (lots of talking), getting familiar with the different areas of the project, and really taking ownership of it. I've also got to hand it to my co-sheriffs, Kevin and Roxanne; they really carried the sheriff responsibility in those weeks.

Blogging

I lacked in this area a whole lot, and it was because I wasn't motivated or inspired to blog about the progress I had made on Telescope those weeks. Admittedly, that's because I didn't spend enough time working on Telescope to have anything of value to write about. My courses this semester were heavy and I had mismanaged my time. It's either that, or maybe I was procrastinating and making excuses. Regardless, I still feel like I could've put a lot more effort into working on Telescope to contribute more significantly.

Despite that, I still want to say all I can about my favourite things about this course and not what I regret.

Conclusion

The too-long-didn't-read version is that OSD700 is an amazing course, and I'm sure most if not all my peers agree with this sentiment. Is it an easy course? Definitely not, but is it hard? That really depends on how much effort you put into it. There are no "assignments" or "marks" and it doesn't really feel like a class; it feels more like a team of software developers who need to come together and all bear responsibility for driving a project forward. It's a course that will give back as much as you work for it. But I think most importantly it gives you a valuable opportunity to work on an open source project (Telescope in this case), with new technologies to learn and a chance to contribute to something.

If I could redo this course I definitely would. I really enjoyed working with this semester's team of developers; they were all really talented programmers and always seemed to know much more than me. But Telescope is an open source project after all, so I'm hoping I can just come back from time to time to see where it's at and have a chance to work on it again.

Lastly, shout-out to David Humphrey! An amazing professor and an even greater mentor, the mastermind behind the course and the real Sheriff of Telescope. If you go to Seneca and have the opportunity to take a course of his (assuming it's related to computing), you definitely should, and don't take it for granted :)

Thanks for reading

by Joel Azwar at Wed Apr 27 2022 14:49:00 GMT+0000 (Coordinated Universal Time)

Tuesday, April 26, 2022


James Inkster

Oh Hello, I’ve always hated CSS.

Hello Open-Source Community.

I haven’t been doing much recently in terms of open-source. I ran into a bit of a pickle. I’ve always disliked HTML and CSS in particular, I never truly took the time to understand. Flexbox, CSS Grids, or Column/Row layouts.

So all my sites would end up having some sort of weird formatting as I progressed, and I'd have no idea whether it was the parent element or the child element creating the issue. I'd ask myself questions like "Am I building this to scale properly?". I'd get extremely frustrated, and eventually want to ditch a project solely because I didn't understand CSS and its mechanics.

This is pretty accurate for me.



After about 3 failed attempts at building a portfolio, where the components would always work individually but looked completely broken when put together (more like a bad rendition of a Picasso), I knew I had to grind through learning layouts, coding strategies, and cool techniques that would help me organize the "code" of my CSS.

I ended up taking a Udemy course on it, which was far cheaper; these courses usually loosely go over the principles, give you a basic understanding, and then show different implementations. One of the first things they go over is actually how to organize your CSS files. In particular, this course was using SCSS, so they implemented the 7-1 folder structure.

In particular, I find this interesting and neat because, once you understand the idea behind it, it's very easy to figure out where your CSS issue lies, if there is an issue.

Okay cool, so this is a good file structure, and it lets you figure out relatively easily which file the CSS you're looking for resides in. But what about making your HTML classes easy to remember, easy to locate, and easy to re-use if need be?
That's where BEM comes into play. Block Element Modifier can easily let a developer know what part of the code you're working on, where it resides, and what kind of element you're working on. This is fantastic for re-usability, quick changes, and even organizing in your head which code you need to change. I'm someone who has always struggled with being okay with submitting trash code. I want my code to be perfect. I want it to scale accordingly, and I like the idea of someone else coming in and being able to read it cleanly. (Keep in mind this is the opposite of my writing; I almost never re-read my work before submitting it.)

Shame on me.

However, code is different for me; it's great reading a proper ternary return statement in a function. Makes my little heart melt. This is why I think CSS has always been so hard for me: it's weird writing something without functions. I want to write neat little blocks of code and return something.

Okay, so what was the reason for the blog post? Well, my friend @RayGervais and I decided to do our own little hackathon last weekend. We set out with a small task: utilize different technologies, see what we could come up with quickly and easily, and get it to a point where it's in a "decent" working condition.

We threw around some ideas, and ultimately we decided on a small, throwaway application that could be used quickly and easily. Ray wanted to use a Go back end, and I just legitimately wanted to test my newfound CSS skills.

We decided on a small project that could expand further. Currently, everywhere expects you to sign up, they get access to your data, and privacy and anonymity get thrown out the window. So picture yourself in front of 100 people, and you have a question you want to ask everyone and get their honest feedback on. That's why we created "Down 4 Whatever".

Essentially you can ask a question, you get a 4 letter code, and you’d just need your people to use that 4 letter code to see and answer your question. It runs off similar principles to Jackbox games. After x amount of time, the room will close, and you’ll be able to see all the responses from everyone. No accounts, no waiting, and quick and easy for everyone to use. Simplicity was our target here. You can check out the repo here.

https://github.com/raygervais/dfw

Ray took care of the backend by utilizing “Go” and GitHub co-pilot. Pretty impressive, and I’m sure he’ll write a blog post about that experience.

I took care of the CSS and the HTML. We didn't feel we needed to utilize a newer framework, as it would be overkill for this sort of project. What I did install for development purposes was live-server, which reloads the page during development when you update the CSS or HTML file. A couple of other dev dependencies to convert my SCSS to a CSS file, and I was good to go.

I tried my best to follow what I have learned, and one of the challenges I set for myself was to try and create a website that did not need separate CSS for a mobile version. This was a difficult task because it's all about making sure any element I use can scale relatively well. There are a couple of minor mistakes, particularly with input boxes (I still need to wrap them in a div so that the input CSS will adjust accordingly wherever you use it on our site).

For the most part, I was happy with what we achieved in under 12 hours. That includes brainstorming, coding, discussing possible options, and even figuring out how to merge our two sections cleanly. There are some fixes we still need to work on, like the result page, and there are some minor CSS bugs. But for such a short time, and my first time actually coding CSS with my newfound abilities, I think this is a win.



Also, please don’t just the colour gradient, there was a lot of debate, and nobody was happy no matter what we chose. I almost think I should make it random. Maybe gradient just is not the “in” thing at the moment.

Overall, check out the site, check out the repo, and if you are like me and hate CSS, learning those principles and structured coding made it that much more enjoyable. You'll have something to build off of and won't feel flustered when your CSS breaks everything.

by James Inkster at Tue Apr 26 2022 21:26:15 GMT+0000 (Coordinated Universal Time)

Sunday, April 24, 2022


Gerardo Enrique

Goodbye, OSD700

And so the curtain falls.

With this last post, I will officially say goodbye to OSD700, a course full of discoveries and learning.

Let's talk about OSD700

I have mentioned this several times before, but I would like to mention it again so that the post is self-contained: OSD700 is a course option offered at the college where I am studying, Seneca College. Despite it being a course that counts toward graduation, we almost never discussed marks or how I would be graded. The professor in charge of the course stated one thing clearly: "as long as you are contributing and you are showing the effort, you will pass this. Do your best."

The course description does not do justice to the responsibilities that you are given. OSD700 is described as a course where you will have to maintain an open-source project, but there's an important aspect here that is never mentioned: what does it mean to maintain an open-source project?

Open-source projects and their challenges

Some people may never be interested in maintaining an open-source project, due to the self-sacrifice required. Since it is an open-source project, you should not expect to get paid at all. Maybe you get lucky and run a successful donation campaign, so that you can give full-time attention to the project. In other situations, however, the project is a side thing, since you have to focus on your full-time job, right?

In my case, I was able to focus on this course like a full-time job, since I had a light courseload. However, it doesn't matter whether you are a full-time or part-time maintainer; the challenges are still the same, they just have to be prioritized quite differently.

What are those challenges?

For starters, you have to prioritize what you want to focus on at a given time. Assuming you are a full-time maintainer, the ideal is that you can give your best in all areas, but that's just an idealization. Most of the time, you have to give up more work than you thought.

Maybe you had an idea you wanted to implement, so, all excited, you start implementing a prototype. You notice that realizing your vision will take more time, so what do you do? If you fall for the sunk-cost fallacy, you might think that it is better to keep developing it until you reach that vision. However, all that time spent on developing that feature is time that could have gone to bug-fixing, paying off technical debt, finding bugs to file, or any other tasks that would still improve the overall health of your repository. So, in the end, you swallow your pride and say: "it is time to give up on this." It may sound somewhat defeatist, but I think acknowledging that other things have to be prioritized is part of what it means to be an open-source maintainer.

Another challenge is not knowing what your end goal is. For a lot of people, not knowing where they will end up after embarking on an adventure can provoke anxiety; the uncertainty of it all keeps you asking, "am I on the right path?"

However, instead of being scared of that adventure for the rest of your life, there are two opposite views on it:

  1. Find a way to set an end goal, and focus on that end goal until you reach it. When you reach it, try to set another end goal.
  2. Let the adventure take you wherever it may lead you, and just enjoy it.

The first one may work with people who already have experience on a particular set of problems and they would like to have something finished, while the second one is for the people who enjoy the journey more than the treasure at the end.

However in an open-source project, you may need both: you want to get things done, so that others find your project useful, but you also would like to explore and enjoy what you learn along the way, since that will help to stimulate your creativity and develop a new way of solving the problem (and it may help keep your mind from going insane out of boredom).

One more challenge that one may encounter is having to communicate your ideas to your fellow maintainers, if you are on a team. The hope is that you are in a collaborative environment, where everybody is willing to listen to anybody. However, just being willing to listen is not enough. You've got to communicate your ideas, even if you think they are bad or don't solve the problem. Why? Well, they help you grow as a developer. If your teammates can justify why a certain solution may not be suitable for a specific problem, then you can use that to your advantage and learn from their way of thinking. Developers can create more robust code by considering several situations and cases, so I think developers can also develop more robust critical and logical thinking by listening to other ways of solving a problem.

A little bit of retrospective

Back when I started this semester, we were supposed to write a blog post about the areas we would like to contribute to the most, and to be in charge of those areas, too.

In my post, I talked about documentation and dependencies. However, throughout the semester, I mainly focused on the dependency visualization project.

At the start, I had this cool idea of a dependency tree that you could navigate through to discover all of the dependencies that Telescope uses, but this idea was just cool; in terms of functionality and usability, it was horrible.

After all, the main purpose of the dependency visualization was to make it easier to find GitHub links for other people, so finding an easy way to navigate through hundreds of dependencies was the most important choice. However, before the front-end arrived, we had to write a service that could provide the information that the front-end would need. The service is not that big, and it actually does very little on its own, so it was a manageable project.

Some opinions regarding the dependency-discovery service

If I have to be honest, I want to improve the dependency-discovery service. I feel that the API can be improved and better defined. Also, the handling of exceptional cases is almost nonexistent, so there's that...

In terms of projects, this is probably the first project I was given total freedom on how to implement and how to design. I was just given a set of requirements, the rest was left for me to figure out, which was somewhat difficult to deal with.

Throughout my life I was always given a task to do and told how they wanted it done, and I was able to follow just that. However, in the real world, most people who tell you what they want you to do are speaking from their own area of expertise. Their solution might not be possible to realize, or it might not be a solution at all. This is why they tend to leave certain things vague: they just don't know what to do in a specific case, or they might not know that that specific case actually exists. This is somewhat vexing for computer programs that could accept any kind of input, because essentially you have what some might consider undefined behaviour.

I am aware that the phrase undefined behaviour has a technical meaning when discussing the safety of C programs, but I would like to take the phrase and view it more literally. When something that could happen in a program actually happens and you are not sure what the program will do, that is what I mean by undefined behaviour. It's behaviour of a program that is never documented nor expected, and so it ends up being undefined. It's not that this behaviour does not exist; it's just hidden, arising from the consequences of your program. This is where a lot of bugs can occur (in fact, all hidden bugs in a program are due to this phenomenon).

I hate that type of undefined behaviour. Why? Because I hate unreliable programs. If computers are fast, why can't they be correct, too? If I am going to type random words in my text editor, I don't want it to crash on me because I accidentally typed too many keys at once. As the user, I don't know how the program behaves, so I expect that, as long as I don't do anything that is obviously unsafe for the program (like turning off my entire computer during an update), I am fine with how the program does things. Of course, if the program can prevent any bad consequences even in those unlikely situations, even better, but that's not a strict requirement.

However, as a developer, when you are discovering what your program has to do, an important question always lingers in your head, "will I need this in the future?" Some people say yes, some people say no. Either way, the answer to this question cannot be boiled down to a simple yes or no, but instead it is reduced to the conclusion that the developer can make after years of experience, and even after that, that conclusion might turn out wrong.

In terms of my set of experiences, I cannot provide an answer yet.

What's left of the dependency-discovery?

A lot of stuff, actually.

First of all, we've got to improve how the service itself works. Maybe find a way to improve memory usage, since we cannot store so much information at a time, even though we would like to in order to save on GitHub calls...
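
One direction this could take (purely a sketch of mine, not what the service does today) is a small bounded, least-recently-used cache, so the most popular GitHub responses stay in memory without the total ever growing without limit:

// Hypothetical sketch: a tiny LRU-style cache for GitHub responses, bounding
// memory while still avoiding repeated API calls for popular dependencies.
class BoundedCache {
  constructor(maxEntries = 500) {
    this.maxEntries = maxEntries;
    this.entries = new Map(); // a Map preserves insertion order
  }

  get(key) {
    if (!this.entries.has(key)) return undefined;
    const value = this.entries.get(key);
    // Re-insert so this key becomes the most recently used
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used entry (the first key in insertion order)
      const oldest = this.entries.keys().next().value;
      this.entries.delete(oldest);
    }
  }
}

module.exports = { BoundedCache };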

We could improve on the current API so that it is easier to use. For example, the /projects route does not provide pagination, so you will get all names at once, which can be annoying for interfaces implementing pagination on their end.
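
As a sketch of what that could look like (the route shape and the getAllProjectNames() helper are assumptions of mine, not the service's actual code), the route could accept limit and offset query parameters and return a slice of the full list:

const express = require("express");
const router = express.Router();

// Hypothetical paginated version of the /projects route
router.get("/projects", async (req, res) => {
  const limit = Math.min(parseInt(req.query.limit, 10) || 50, 200);
  const offset = parseInt(req.query.offset, 10) || 0;

  const names = await getAllProjectNames(); // assumed helper returning every dependency name
  res.json({
    total: names.length,
    limit,
    offset,
    projects: names.slice(offset, offset + limit),
  });
});

module.exports = router;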

Another thing that could be done is to research what other functionality might be useful for the service. This might not be necessary, since the service had a single purpose, but if this is an API that other clients could consume, maybe we could try to expand more on what could be possible with this service.

Final words

What's left to say? Probably a lot, but I don't want to make this a 20-minute read, since that'd just be me rambling on and on about certain topics related to my experiences with this.

I would like to end this post by thanking everybody who participated in the OSD700 course and gave their support to bring Telescope to version 3.0. Best wishes to everybody!

by Gerardo Enrique at Sun Apr 24 2022 05:08:10 GMT+0000 (Coordinated Universal Time)

Saturday, April 23, 2022


Francesco Menghi

Telescope 3.0

Telescope 3.0 is finally out! For the final week of OSD700 I finished two issues that I mentioned in my previous blog post and some other small fixes.

I successfully added the YouTube and Twitch sign-up page and it now looks like this:

This PR was supposed to only add support for Twitch while Thanh was in charge of adding support for YouTube. However, after Dave's feedback, I ended up adding YouTube in the sign-up instructions and finally Thanh reviewed and approved my PR.

Many fixes

When something doesn't work and you try to find a solution, often the fix turns out to be something really simple. This is what happened with our Turborepo CI cache implementation.

The reason the CI cache didn't work is that we were missing the team flag in the command. I had previously assumed it was not required after looking at the Turborepo docs, but Felix (the maintainer of the GitHub Action that we're using) came to the rescue and suggested the fix. In the end, this simple PR made everything work!

ESlint change

Our ESLint implementation included TIMING=1 to get a performance overview each time we lint our code. This, however, felt unnecessary most of the time, so I decided to make it a separate command.

Going forward we have two ESLint commands that are run using Turborepo:

  • pnpm lint: runs ESLint in our repo.
  • pnpm lint-time: runs ESLint in our repo with the TIMING=1 option.

Docusaurus fix

Another quick fix I made was removing these two options from Docusaurus:

showLastUpdateAuthor: true,
showLastUpdateTime: true,

These two options allow you to use the git history to retrieve information on when a page was last updated and by whom. Unfortunately since our Docusaurus app lives inside the monorepo, .git is only found in the root of the monorepo. So, when building the Docusaurus app in a Docker container, there was no git info being copied over and it resulted in a long list of errors.

The end of OSD700

This was the last week of OSD700 and the last week of my studies at Seneca. I cannot believe I have already reached the end. It feels like yesterday that I was learning things like navigating around a terminal or the basics of HTML, CSS, and JavaScript.

Both OSD600 and OSD700 were an incredible and unique experience. Last semester I learned how to use git and GitHub and made open source contributions during Hacktoberfest and beyond. This semester I participated in taking the Telescope project from version 2.0 to version 3.0 with an amazing group of people.

Working in open source feels completely different than getting an assignment done and submitting it before a deadline. When you open a Pull Request, your code is there in the open ready for other people to take apart, find potential problems and eventually merge. In my experience, there is often a peer with better experience with a certain tool or technology that can provide help or guidance when you are stuck.

Looking back at my first post of the semester where I wrote some goals for the class, I feel like I have accomplished what I set out to do. I worked a lot on monorepo tooling, and I used Docker, Nginx, React, Docusaurus and more.

I am excited for the future of Telescope and how the next group of students will influence its direction. I plan to stick around and continue to make contributions to the project!

by Francesco Menghi at Sat Apr 23 2022 22:29:47 GMT+0000 (Coordinated Universal Time)


Diana Belokon

Getting a little bit technical!

After expressing my feelings about the OSD700 course, and all of the experiences that I went through this term, I thought, "what a good way to end the term..." But then I remembered that I have to speak about what I managed to get in for release 3.0!

So, I have one PR that took a while, despite being a single-line change, and I would like to go into extreme detail, because it ends in a really interesting lesson for other people to learn.

Flickering stars

So, back in release 2.9, I developed a simple "star field", which is a simple animation showing the GitHub contributors as profile pictures. I got the initial code and idea thanks to the Coding Train :D

One weird quirk that the star field had is that the profile pictures would "flicker". For some reason, a picture would get small, then appear super big, and then get small again, acting like normal. It is somewhat difficult to describe, but this would give a flickering effect that was very annoying and somewhat horrible to look at...

Adventuring into the solution

We did what a responsible developer would do: file an issue and leave it for later :)

After a while, the issue was picked up by @jerryhue. He mentioned something about rewriting the math logic so that it would fix the flickering issue.

I wasn't super sure what that would mean, because the logic itself was fine, it was just a weird flickering that was happening. I thought to myself, "how is that related to the flickering?"

After a loooooong while, the team decided to assign me the issue in a meeting. After the meeting ended, I asked @jerryhue why he couldn't solve it. He told me that it was difficult for him to rewrite the math logic since he was struggling to think of an easier approach to do the star field simulation.

Rewriting the solution or fixing the current one?

I wanted to know why he was so obsessed with rewriting the logic. He thought that the current logic was fine, but that it could be improved. One reason for this improvement was to make the stars a little more "believable", so the zoom effect would look better.

I understood his reasons, so I decided to improve the code to achieve something like that.

Of course, I didn't manage to do much...

I was back at square one. I was feeling frustrated. "Why can't I solve this?", "why is this fix so difficult?" I was almost going to give up and hand it to someone else; however, I decided to try one last time.

If I was going to debug it properly, I wanted to write it in the p5.js debugger, since I just wanted to focus on that specific part, and I didn't want to wait for Next.js to compile every time I made a small change.

When I was trying it out, I didn't want to include the images at first, because I would have had to make an array of image URLs for p5.js to load and I was feeling a little lazy, so I just went with an array of circles to be drawn on the screen.

The only change I made was drawing circles instead of GitHub profile images; everything else in the code was the same, and to my surprise, the flickering stopped.

I tried to understand the situation: how come the flickering occurs only with images and not with regular circles? At that moment, I decided to load a single image that could be reused for several stars and understand why this was the case. As expected, the flickering started to happen when I drew images.

I was utterly confused, since the behaviour of an image was clearly different from that of a circle. So, now that I knew it would only happen with images, I had to understand when that flicker would occur.

I wrote some code to take note of the values, and I noticed something very consistent: the flickering wouldn't start at the end, when a star reached its maximum size, but right at the very start, when the size of the star was 0.

When I thought about this, I was still confused: why at that moment? To answer some of my questions I went to the documentation to read up on the behaviour of the image function, the function that draws images on a canvas. Nothing much was mentioned about what happens when the width and height of an image are 0.

Since nothing was documented, I said to myself, "it wouldn't hurt to try in the live examples available", and so I did. I set the size of the image to 0, and all my doubts started to disappear. If you try to draw an image to a canvas with dimensions of 0, it uses the original dimensions of the loaded image!

So, that meant that if I wanted to avoid the flickering, I had to stop the size from ever being zero, and instead keep it somewhere close to it. This line came out as a result:

this.z = p5.random(p5.width * 0.1, p5.width * 0.95);

Instead of getting a value that might be zero, I now get a value that is never zero, and thus get rid of the flickering once and for all!
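
To see the quirk in isolation, here is a minimal stand-alone p5.js sketch illustrating the behaviour described above (this is an illustration, not Telescope code, and the image URL is a placeholder): the circle simply vanishes as its size reaches 0, while the image suddenly pops back to its full, original dimensions.

let pic;

function preload() {
  // Placeholder image URL; any loaded image shows the same behaviour
  pic = loadImage("star.png");
}

function setup() {
  createCanvas(400, 200);
  imageMode(CENTER);
}

function draw() {
  background(0);
  // Shrink from 100 down to 0, then repeat
  const size = max(100 - (frameCount % 150), 0);
  circle(100, 100, size);           // a circle with size 0 simply disappears
  image(pic, 300, 100, size, size); // an image with size 0 pops to its original size
}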

Conclusion to this whole adventure

To be honest, this helped me learn a huge lesson. Sometimes it is harder to create a solution with the tools that you have than to create your own tools to build your solution; at the same time, though, it is valuable to not have to create anything from scratch or rewrite everything. Even though it would have been nice to rewrite the math logic so that it looks better, I much prefer this solution!

by Diana Belokon at Sat Apr 23 2022 17:21:35 GMT+0000 (Coordinated Universal Time)


Hung Nguyen

Final week

Overview

It has been a long run for my second journey into open source. It is time for release 3.0 to land. I am happy that we finally have a mobile app with the posts service, which is the main feature of Telescope. On top of that, the student quotes and Telescope's About screen (which is all about Telescope) have been finished with a Supabase integration. This represents a lot of really hard work from James, Kevan, and me. I would like to say thanks to all the reviewers who gave us a lot of useful feedback.

Last PR for this semester

This PR is about styling the post component for the posts service timeline.

A header has been added to each post. It contains the name of the author, and the title is now more similar to the original post in the Telescope web app. Not only that, if you tap the author's name, the app redirects to the author's blog timeline, where you can find that author's other posts. Based on some feedback, I still have to fix the text font and padding for the page.
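
As a rough sketch of the idea (the component, prop, and route names here are mine, not the actual Telescope mobile code), the header could be a small React Native component whose author name navigates to that author's timeline when pressed:

import React from 'react';
import { View, Text, Pressable, StyleSheet } from 'react-native';
import { useNavigation } from '@react-navigation/native';

// Hypothetical post header: pressing the author's name opens their timeline
export default function PostHeader({ author, title }) {
  const navigation = useNavigation();

  return (
    <View style={styles.header}>
      <Pressable onPress={() => navigation.navigate('AuthorTimeline', { author })}>
        <Text style={styles.author}>{author}</Text>
      </Pressable>
      <Text style={styles.title}>{title}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  header: { padding: 12, borderBottomWidth: 1, borderBottomColor: '#ddd' },
  author: { fontWeight: 'bold', color: '#0077cc' },
  title: { fontSize: 16, marginTop: 4 },
});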

Final thought

This semester was great: I had more chances to contribute to the open source community, especially Telescope. I also had the chance to work with React Native, which was really new to me. Although it is the end of the semester, the app still needs some improvement, so if I have some time, I will definitely keep working on it.

by Hung Nguyen at Sat Apr 23 2022 17:11:37 GMT+0000 (Coordinated Universal Time)


Gerardo Enrique

Before the finale...

Although I should have written this post a week ago, I'm glad I am writing it now, since I feel that I can properly divide the topics I would like to talk about between this and the next blog post.

For the people that lack context: OSD700 is a course option given to students taking certain programs in Seneca College. As part of the course work, you are supposed to contribute to this open source project that the school gives support to: Telescope.

I would like to talk more about this, but I want to leave it for the next post, as that will be the final one I write on Telescope directly. It does not mean that I will stop writing about Telescope; instead, I will stop writing about it in the context of OSD700. I may try different ways of talking about Telescope, and experiment a little bit more!

Either way, this post and the next one are opposite sides of the same coin, that coin representing the finale of this "emotional" character development arc of mine (not really). In the last post, I would like to do a recap of my adventure in this course, what I hope I have learned, what I managed to contribute to Telescope as a whole, and my aspirations going forward.

However, we gotta talk about what we are going to ship in Telescope 3.0, right? Since old habits die hard, we still are going to talk about the PRs that I managed to contribute for 3.0, as well as what else went into release 3.0. Also, it is not like this release is going to be the last one in Telescope, there's still plenty of work to be done!

So, what's up with the release?!

Well, this release was wild! It is kind of unfortunate that we couldn't have a calm release for the final one (the alpha release was much calmer...). There are a couple of problems that are going to be addressed throughout the weekend, because the team was starting to feel tired after a long meeting session where we prepared the remaining PRs for merging.

What did you manage to submit for 3.0.0?

Most of the PRs I did for this release were small, since I was taking a step back to focus on other courses that I had to pay attention to.

The most remarkable one would be moving the star field element that @dbelokon worked on in #3149 over to Docusaurus. This one was fairly straightforward to do, since I had done something similar in the past (throwback to what I had to do related to WebAssembly!). I did not add any new code; instead, I adapted it to Docusaurus. I had to follow up with a few fixes, since the original PR was missing something that nobody noticed until it was time to build and deploy the docs.

There's also the documentation for the beloved dependency-discovery service, which describes the API of the service in a more detailed manner.

And that's pretty much it. I did work on other PRs, but they were small fixes to things I had to sort out so I could get on with other tasks.

by Gerardo Enrique at Sat Apr 23 2022 09:47:18 GMT+0000 (Coordinated Universal Time)