Planet CDOT (Telescope)

Wednesday, February 19, 2020


Nathan Misener

My first feed

Well I wasn't thinking I'd be doing this blog anytime soon, but...

by Nathan Misener at Wed Feb 19 2020 17:27:56 GMT+0000 (Coordinated Universal Time)

Sunday, February 16, 2020


David Humphrey

Older, but not wiser

I made a mistake recently. You'd think I'd know better (in fact, I do know better).  Actually it's something I preach about so often, I thought I'd better do some penance in the form of a blog post.

A colleague emailed me to tell me that all of her students were having trouble with an assignment I'd created.  For the past few years, I've been trying to use a "get these unit tests to pass" style of programming assignment for early semester students.  It takes more work upfront, but reduces my marking time significantly.  It also gives a much stronger feedback loop for students, who can treat the problems like a game, where getting things to pass is how you win.  I've got an assignment starter repo where I've been refining this technique.

For this particular assignment, I wrote a test where students had to work with an Array of users, convert their birthdate (String) to a Date, and figure out who was the oldest.  It worked great, until it didn't.  My colleague had her due date later than mine, and one day this particular test started failing for everyone.

The problem is that I used new Date() (i.e., the current date) as one of the pieces of logic in the test. In other words, I used something that will change with every test run, but assumed it was stable between runs.

Working on code at Mozilla taught me most of what I know about writing tests.  Writing a good test is hard, especially when any aspect of it depends on timing.  A test that relies on any kind of timing data can pass or fail depending on (you guessed it) the timing of the test run.  If you get it wrong, you get random failures; and if people start to think that these failures are ignorable, you quickly lose confidence in your test suite, and it all falls apart: "We can ignore this failure, it often happens."

In my tests, I used a bunch of mock user data generated by Mockaroo, and based on the data I got, I figured out who the oldest user was manually.  At this point, I should have pinned my test date to the date I wrote the tests vs. creating a current date.  If I had, I would have eliminated the variance and my test would have always worked.
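
To make the fix concrete, here's a minimal sketch of the idea (hypothetical data and helper, not the actual assignment code): compute against a pinned date rather than new Date().

// Hypothetical sketch, not the real assignment: keep the reference date fixed.
function ageInDays(birthdate, referenceDate) {
  return (referenceDate - new Date(birthdate)) / (1000 * 60 * 60 * 24);
}

test('identifies the oldest user', () => {
  const users = [
    { name: 'Ada', birthdate: '1990-03-14' },
    { name: 'Grace', birthdate: '1985-07-02' },
  ];
  // Pinned to the day the test was written; using new Date() here would let
  // the expected values drift between test runs.
  const referenceDate = new Date('2020-02-16');
  const oldest = users.reduce((a, b) =>
    ageInDays(a.birthdate, referenceDate) >= ageInDays(b.birthdate, referenceDate) ? a : b
  );
  expect(oldest.name).toBe('Grace');
});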

If you want to become a great programmer, work on writing tests.  It's often more difficult than writing the original code because it forces you to think about the conditions in which your code is executed, and the environment within which it gets embedded.

by David Humphrey at Sun Feb 16 2020 19:11:04 GMT+0000 (Coordinated Universal Time)

Thursday, February 13, 2020


Bowei Yao

An Anonymous Chatroom

I started on this project about a week ago. It’s supposed to be an anonymous chatroom program. Right now there is only one chatroom available to the whole world, but later it will be divided up into mini rooms.

Here are the links to the app:

frontend: https://chatroom-client-bwyao.herokuapp.com/

backend: https://chatroom-server-bwyao.herokuapp.com

frontend code repo: https://github.com/ragnarokatz/chatroom-client

backend code repo: https://github.com/ragnarokatz/chatroom-server

Technology stacks used:

frontend: socket.io, react.js, fingerprintjs

backend: socket.io, react.js, mongoose

database: mongodb on atlas

Since I’m taking BTI425 (the web course for frontend technologies) this term, this is the first time I’m doing an application with react.js. I feel like this app is good practice for getting familiar with the new material that I’ve learned in the past month or so.

by Bowei Yao at Thu Feb 13 2020 03:58:18 GMT+0000 (Coordinated Universal Time)

Tuesday, February 11, 2020


David Humphrey

Shipping Telescope 0.6, Planning 0.7

0.6 Release Requirements

I've just been going through the students' work for our 0.6 release of Telescope.  In a previous post, I threw down a gauntlet and told the team I needed to be able to dogfood their code on a staging server.  Specifically, for 0.6 I said:

  • I have to be able to go to our staging server at http://dev.telescope.cdot.systems/.  If I can go to https://dev.telescope.cdot.systems/ instead, that's a bonus.
  • The site that runs there has to be a GatsbyJS app.  If the data it hosts comes from our GraphQL API, that's a bonus
  • I have to be able to login using our fake SSO service, and it needs to show me that I'm logged in somehow
  • The data hosted in the GatsbyJS app has to be live, and continually updated
  • I have to be able to read everyone's 0.6 blog post describing what they did, and what I need to mark
0.6 Release Realities

How did they do?  Really well, I think.  Over 95 commits went into this release, and here are some of the highlights.

First, https://dev.telescope.cdot.systems is up (notice the S in HTTPS).  Our staging server uses Docker Compose to build and serve all of our apps.  We have an nginx reverse proxy to our node app, using automated Let's Encrypt certificate renewal to give us HTTPS.  A lot of this work was done by Josue and Miguel.  We also had an alum drop in on the project unexpectedly and give us a PR to reduce our main Docker container from 1.9G to 250M!  Thank you Ray Gervais, you must have had a great teacher to know all this :)

Second, we now have a Gatsby frontend app!  A lot of people contributed to this, both with design and implementation work.  A huge thank-you to Ana who led design discussions and to Miguel, Cindy, Krystyna, and James who did a lot of the heavy lifting to get the app built.

Third, I'm able to login using SAML2 based Single Sign On.  James and I collaborated a bunch on this, with help from Josue to get it running smoothly on our staging server.  This is tricky stuff, and I'm glad we finally have it working.

Fourth, the data in our Gatsby app is live.  The data currently comes from our REST API rather than our GraphQL endpoint, and we'll work on that in 0.7. Having a staging box up now, with real data, has been amazing.  One thing it has already enabled is that Cindy was able to get Zeit Now Preview Deployments working.  Every time one of our devs sends a PR, the Gatsby frontend gets built and hosted on Zeit.  It makes it so, so much easier to review frontend changes.  We also host master on Zeit, so it's easy to check on how things are working as we merge.  And since it pulls data from staging, we get to test it using real data.

Speaking of our backend, a number of our team members focused on backend app improvements, from Redis to REST to GraphQL to tests, and literally everything in between.  Thanks to Rafi, Calvin, and Julia for making sure that the data flows properly.

Fifth, I was able to mark this release by reading (almost) everyone's blog posts on our staging server running our app.  This dogfood tastes pretty good!

Some other interesting stories from this release:

  • We got hacked! I made some slides to show what happened in more detail, but the summary is that we kept hitting this weird bug on our staging box where Redis would go down every day.  It turned out to be hackers trying to sync with our Redis instance over the internet.  No harm done (we don't store any data that matters on that instance), and we got to discuss how to expose and protect ports with Docker.  I'm going to call this "a huge learning opportunity."
  • The different ways people like to work became more evident in this release.  Some of our team love using Slack and an "always-on" style of collaboration.  For others, this approach is distracting or impossible with other life commitments.  I see this mirrored in lots of open source projects and tech companies, and learning how to best include developers at both ends of the spectrum is really important.
  • We created an automated release process, and did our first proper release.  In the next release, this will also kick off a build.
0.7 Planning

Yesterday we met during our weekly Triage meeting to do a planning session for 0.7.  In the previous release, I picked what was in- and what was out of scope.  This time I wanted to share that responsibility across the team, and make sure we were hearing from everyone about their priorities and interests.

Release 0.7: James says it's easy!

Julia took excellent notes, which you can read for more detail.  During the meeting James repeated the phrase "It's Easy!" so often that the rest of the team has nicknamed the 0.7 Release: "James says it's easy".  At a high level, here's what it needs to include:

  • I should be able to log out of the app (SSO login works, but logout doesn't)
  • A push to master should deploy to our staging box automagically
  • We have a prototype UI for doing search
  • We can search for posts by Author
  • We have a prototype for an authenticated user to be able to Add a Feed from our frontend to the backend
  • Our Posts need to be better styled (images behave, fonts, etc)
  • The UI needs to look closer to our design
  • Our frontend app should make use of more Gatsby patterns and plugins

Luckily, James says this will be easy.  Let's find out!

by David Humphrey at Tue Feb 11 2020 18:49:39 GMT+0000 (Coordinated Universal Time)

Monday, February 10, 2020


Calvin Ho

Fullstack Developer Wanted??

I don't get these job postings.

There was apparently an infamous post written on Medium by a big-name guy declaring Fullstack is dead.

After working on Telescope, I think so too. The field is so broad now, with an ever-increasing amount of technology a developer is supposed to know. A typical fullstack developer posting on job sites goes as follows:

We're looking for a fullstack developer with the following experience:
  • Node.js + Express/Java/Golang/Python/.Net
  • Javascript, CSS, HTML5
  • Angular/React + Redux/Vue
  • Docker
  • No SQL: MongoDB/ Cassandra/ DynamoDB
  • SQL: Oracle/ MySql
  • AWS (may or may not include serverless functions)
  • GIT
Bonus if you have experience with the following:
  • Redis/Memcached
  • User Experience design
  • GraphQL
  • CI/CD: Travis, Circle
  • Kubernetes
  • JWT/Auth0
WTF? One of the students in the class has spent pretty much a semester, if not two, just learning and trying to implement SSO for Telescope. I mean, I understand this is a wishlist, but this is insane. You might as well hire my Open Source Prof, @humphd, at this rate. Thanks.  

by Calvin Ho at Mon Feb 10 2020 06:30:59 GMT+0000 (Coordinated Universal Time)

OSD700 Release 0.6

Worked on a few issues for Telescope for this release:

GraphQL documentation for Telescope
Issue can be found here

This was fun; I knew nothing about GraphQL going into this. By the end of it I was even hacking away at nested queries on my own branch, which we haven't even implemented yet in Telescope. I always shied away from documentation, because I'd rather be coding. I guess it is true: to see if you've really learned something, you should be able to explain or teach it to someone else.

GraphQL filters for Telescope
Issue can be found here

Aside from documenting how to use GraphQL, I also took on an issue which required me to rewrite some queries to allow filtering and support future search functionality for the front end. This taught me some pain points about GraphQL, as I always assumed it could do stuff like a traditional database, for example: select * from posts where date > provided date, or something along those lines. GraphQL cannot support this without installing another library, so I ended up writing my own logic to do filtering and pagination.
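
To give a rough idea of what that looks like, here is a simplified sketch (the getAllPosts helper and argument names are placeholders, not Telescope's actual resolver):

// Simplified sketch, not the actual Telescope code: filter and paginate posts
// inside the resolver, since plain GraphQL has no built-in "date > X" operator.
const resolvers = {
  Query: {
    posts: async (parent, { publishedAfter, page = 0, perPage = 10 }, { getAllPosts }) => {
      const posts = await getAllPosts();
      const filtered = publishedAfter
        ? posts.filter((post) => new Date(post.published) > new Date(publishedAfter))
        : posts;
      // Manual pagination over the filtered list.
      return filtered.slice(page * perPage, (page + 1) * perPage);
    },
  },
};

module.exports = resolvers;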

On a side note, I also learned people can publish scalars (GraphQL types) in packages for other people to download and use.

Include logic to filter inactive feeds and invalidate inactive feeds for Telescope
Issue can be found here

Another issue I started over the Christmas weekend and finally finished. This went through a few iterations, and in the end it was suggested to scrap the current code in favor of a more Redis-oriented solution.

Refactor promises for plumadriver
Issue can be found here

Our prof suggested this issue to me, since I did quite a bit of work refactoring promises for Telescope. It was an interesting experience reading TypeScript code and contributing to another repository after a few months of just working on Telescope.

by Calvin Ho at Mon Feb 10 2020 01:55:05 GMT+0000 (Coordinated Universal Time)


Krystyna Lopez

Release 0.6

Going back to Telescope.
Between release 0.5 and 0.6 I was working on the Telescope project. I was assigned to a few issues during the two-week period. 
My first issue was to add a test to make sure our application starts. The issue referenced a page where I could possibly look for a solution. Ideally, issue-363 should: a) make sure that our app is working, and b) allow hitting the correct root.

After reading examples and documentation I learned that there is a 'start-server-and-test' npm package that starts the server, waits for a URL, then runs the test command; when the tests end, it shuts down the server. 

How to use this package:
Install the package: npm i start-server-and-test
Add a command to the scripts. In my case, Telescope already has a command to start the server and a command for the tests.
This way I just had to combine both commands into one, which I named "CI", and add the localhost URL. 
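
Roughly, the scripts section ends up looking something like this (the script names here are placeholders; Telescope's real ones may differ):

"scripts": {
  "start": "node src/backend",
  "jest": "jest",
  "ci": "start-server-and-test start http://localhost:3000 jest"
}
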
In the end, this package did not solve the intention of the issue I was working on: it runs the tests in the background once the server is started. Even though I did not solve the issue the way the author intended, I learned something new.

The other issue I worked on for Release 0.6 was cleaning up package.json on the frontend and gatsby-config.js. 


Mainly, this issue included removing boilerplate from the existing files.

I was also assigned to another two issues. One of them is to create a React component that counts the number of words in a blog post, and the other is to investigate the Mercury parser. The React component is ready, and I will write a blog post about it once I'm done with its tests. The Mercury parser deserves a separate blog post because there are many interesting things I would like to talk about.  

by Krystyna Lopez at Mon Feb 10 2020 01:05:18 GMT+0000 (Coordinated Universal Time)

Sunday, February 9, 2020


Ana Garcia

непредвиденный (unforeseen)

Along with writing and the arts, I have always enjoyed languages. I like the patterns, the similarities, the differences, and the coincidences. During the past summer, I realized it was the first time in two years that I didn’t have anything to do. Four whole months where I could actually work on my hobbies without thinking about deadlines or grades. The perfect time, or so I thought, to give learning a language a shot.

Things got out of hand. Quickly.

To make a long story short, I added way too many languages to my Duolingo app. Among the many, I had Japanese and Russian. They were the only languages I chose with an alphabet different from the Latin alphabet, but my experiences with the two turned out to be complete opposites!

The Japanese module started with you learning hiragana, one of the building blocks of the Japanese writing system. You would learn a character set, words you could make with that set, and then when it came to test your knowledge, you could click on the nice tiles and be done.

Duolingo Japanese Modules

Russian wasn’t like that.

When I decided to do Russian, it threw me right in, with no explanation of the Cyrillic alphabet. There are 33 letters in it, and not many of them sound the way you would expect.

See the P? It’s not our type of P

I did my best, but it is not easy to write something when you don’t know what each letter sounds like. Unlike the Japanese module, I actually had to type and spell. Which meant I had to get a Russian keyboard for my phone.

Eventually, I realized I could only get so far. Sure, I could wing certain words, and auto-correct saved me a couple of times, but there was a lot of fundamental knowledge that I was missing. The more words I learned, the harder it was to remember them all. The more I tried to move forward, the more I was stopped because I could not spell.

It was the realization that made me give up on Duolingo. That, and the fact it made the process into one about gaining points. Learning and understanding become secondary when there’s a number you need to gain.

You might be reading this, wondering what it has to do with my progress with Telescope, and well…

Much like I lack the Russian fundamentals, I lack the fundamentals of a big part of web development. I’m currently at a place where I am stuck, hacking away hoping something sticks. Up until recently, I dealt with problems everyone else had dealt with because they had been problems in our assignments. Now, working with Telescope, and all the stuff it interacts with, you can’t really go to Stack Overflow and find an answer.

It feels difficult asking for help, since I feel that I should know a lot of what I don’t. It doesn’t help that the more the project grows, the more daunting it becomes to learn. I didn’t exactly set myself up for success either, since I signed myself up to work on a complicated (for me) area of my capstone project for my first milestone. Too much research to do, too little time.

My first issue and pull request for 0.6 were very easy: adding a React version to the eslintrc file.

PR: Added react version to eslintrc file | Issue: React Version Warning

What I did:

  1. Added settings to the eslintrc file that specify the React version to use
  2. This prevents the warning about the React version not being specified

The code:

settings: {
    react: {
      version: require('./package.json').dependencies.react,
    },
  },

Where I got the fix:

Warning: React version not specified in eslint-plugin-react settings

My second pull request wasn’t.

Issues: Add static route to serve new GatsbyJS frontend | Add building frontend for staging and production deployment

Pull Request: Point / to the gatsby public folder

At first my issue was that my machine could not run Docker. Well, it can, it just takes a toll. I can only have VSCode and one Firefox tab open at the same time. Usually, when I’m coding, there are at least 7 tabs. It didn’t help that my capstone also had a milestone going on at the same time, and that I was working on the frontend. I couldn’t do all of them at the same time, and I tend to push projects to the side when I don’t feel confident.

Eventually I realized I could work on what I was doing without Docker – my capstone milestone had passed, Telescope’s design was frozen – so no more excuses. I read a lot, but it was more to rule out stuff than anything else. In the end, I ended up adding this code to the Gatsby side of things.

What this code does is create a proxy to localhost:3000, where our server is, and process requests.
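
For reference, that kind of proxy lives in gatsby-config.js; a minimal sketch of the idea (the /api prefix is illustrative, not necessarily what Telescope uses):

// gatsby-config.js (sketch): forward matching requests from the Gatsby
// development server to the backend running on localhost:3000.
module.exports = {
  proxy: {
    prefix: '/api',
    url: 'http://localhost:3000',
  },
};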

My problem arose from the fact that I could only successfully point the server to the public folder, which only appears once you build Gatsby, and that was not what we wanted. We wanted to be able to see the changes made to the frontend in development mode immediately. No matter how much I researched, I couldn’t find anything. I also didn’t know if I should change any of the routes or if I needed to make my own. This was probably my first time looking at Telescope’s server, and I’m still not quite sure what a lot of it does.

Eventually, we decided to just point to the public folder and have Docker build Gatsby automatically. I also added that code, with a lot of help from David and Josue, which you can see below.
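
The gist of “pointing / to the public folder” is serving Gatsby’s built output statically from the Express server; a hedged sketch of the general approach (paths are illustrative, not the exact code from that PR):

// Sketch only: serve the built Gatsby site from the Node server.
const path = require('path');
const express = require('express');

const app = express();
app.use('/', express.static(path.join(__dirname, '../frontend/gatsby/public')));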

This was my first time dealing with Docker, which was interesting.

Going forward, I want to do more work implementing the frontend. I feel that I will be able to work on more issues that way. I also have a better understanding and a better base, which will free up my time to review more pull requests. I feel like I need to do more reviews, but I’m often too busy working on my own issues to get to them.

Despite the difficulty, I’m glad I decided to continue working on Telescope. The benefit of deadlines and grades is that you need to do work. And in order to get things to work, sometimes you need to learn. At least I’m still trying with web dev, even if the same can’t be said about Russian.

Speaking of Russians, turns out that while we were working hard for release 0.6, Telescope suffered an attack from a Russian and Ukrainian IP. Can’t say I expected that.

by Ana Garcia at Sun Feb 09 2020 05:06:32 GMT+0000 (Coordinated Universal Time)

Aside: Black Hole of Time

When I started working with Telescope, I didn’t want to.

I’ve never been the best coder, nor have I felt comfortable with open source. So when it came time to start setting up Telescope, I had no clue about what should be done and I didn’t want to go into any of the discussions. There were about 60 people all desperate to file bugs, and way fewer who knew what they wanted to do and had very strong opinions about it. I might or might not have also redirected all the emails from GitHub into their own folder in my email. To be honest, I can’t say coding is my passion. I think it’s neat when things work, but I don’t “eat Java for breakfast.”

There had been a discussion early on about what technologies we should use. I didn’t care for it. Not until I found Gatsby – and at this point I’m starting to feel like an ad. I like working on the front-end and designing interfaces; it is a combination of the accomplishment I feel when I code something successfully and the enjoyment of doing graphic design. I wanted to work with the frontend of Telescope; it would be something that I knew and could lead on.

And so we’re here.

For release 0.6, I finally started designing.

I did some research about modern design, determined to bring Telescope away from the 90’s vibe. With Adobe XD, I soon spent way too much time on it.

For the first iteration, I made a hero banner, main page, and a menu drawer.





After the initial design, the discussion moved away from the main telescope repo to a team discussion that can be found here.

Some of the feedback was:

  • Smaller paragraph width
  • Smaller navbar size
  • Having author and date visible
  • 21px for main text
  • darker colours

From that feedback I made some improved designs:

One of the problems I ran into was that the blog section was too small. There was too much white space surrounding the blog, and adding another blog beside it would not bring an optimal reading experience. So I decided to add a participant section – mostly as a placeholder – to balance the content.

Afterwards, I took an on-demand edit approach to the feedback I received.

Eventually, I decided when the design should be frozen. I also realized that an on-demand approach is not a particularly good way to deal with feedback, especially since I wasn’t letting people know my design decisions. Going forward, I would like to implement a better system for feedback on designs, as well as one for posting the designs.

I found that the current way is not great for following conversations, and it doesn’t allow people who aren’t on-line at the moment to give any feedback.

Overall, I’m happy with what I made and I’m looking forward to doing more designs.

by Ana Garcia at Sun Feb 09 2020 03:04:11 GMT+0000 (Coordinated Universal Time)


Cindy Le

OSD 700 Release 0.6

As promised, I completed Issue #517 and Issue #530. 517 dealt with integrating Zeit Now, which was a little weird to do because in order to actually set it up, I had to be the organization owner or repo owner and I was neither of those, so I had a lot of help from @humphd . Everything is working as it should for Zeit; we’re all able to see the logs and deployment information (for now). I did a small demo in class for this and hopefully didn’t confuse too many of my classmates. I’m not very good at multitasking so I had to stop talking to type XD. I did a little dry run the night before and realized that running Telescope on my VM would be way too laggy for the presentation, so I decided to run it directly on Windows. Luckily for me I didn’t even need to have Docker running to run npm run develop since it only dealt with the frontend stuff and didn’t need the backend stuff. I don’t like developing on Windows because I have to mess with my Hyper-V settings to get Docker to run, and that means I lose the ability to run my VirtualBox and Android Studio. I can’t have both; it’s either Docker only or everything else I like. Plus every time I change the setting, I need to restart my laptop.

530 was just documenting four domains. I thought documentation PRs would be very easy for everyone to review and approve, but nope, not everyone is interested in reading documentation, so most of the time it just sits there for at least a couple of days. Doesn’t matter if it’s 4 lines or a whole guide, it still takes at least a couple of days to get approved.

The third issue I picked up was #642. It was a late pickup, but I knew I had to do something for the frontend so I picked the easier of the two (OR SO I THOUGHT). I glanced over the frontend React code for the Zeit demo and I was like “ah this is different, I just need to edit a line for my demo, I’ll look at the rest later”. I also procrastinated because the frontend was being restructured, so I waited for it to be merged before I did anything. Come Thursday night, I’m actually reading the code and was like “What happened? Is this what React looks like now??? What’s ({ className, drawerHandler, scrolled })?”

const Header = ({ className, drawerHandler, scrolled }) => (
  <header className={`${className} ${scrolled ? 'sticky' : ''}`}>
    <nav className={`${className}__navigation`}>
      <div>
        <HamburgerButton click={drawerHandler} />
      </div>
      <div className={`${className}__title`}>
        <a href="/">Telescope</a>
      </div>
      <div className="spacer" />
      <List items={items} className={`${className}__navigation`} />
    </nav>
  </header>
);

Header.propTypes = {
  className: PropTypes.string,
  drawerHandler: PropTypes.func,
  scrolled: PropTypes.bool,
};

I’m used to this

class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}
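
(For reference, the ({ className, drawerHandler, scrolled }) part is just the props object being destructured inline; a functional component written that way is equivalent to reading this.props in a class. A toy example, not Telescope code:)

// These two components render the same thing; the second one simply
// destructures the props object in the parameter list.
const WelcomeLong = (props) => <h1>Hello, {props.name}</h1>;
const WelcomeShort = ({ name }) => <h1>Hello, {name}</h1>;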

I tried my best to follow the frontend React structure but it was hard because I didn’t truly understand the code. Part of me wanted to just write my component my way, but I got a few errors and was like “okay, I guess I can’t do it my way, I’ll try the original way again”. I managed to get the bits I wanted on the screen but I’m currently having some trouble styling it, so I decided to throw up a PR of what I have, and I got a couple of helpful comments on how to achieve what I wanted. I haven’t gotten around to trying it because I wanted to write this blog post first.

In our next Triage meeting, we’ll be unfreezing the frontend design so I’m pretty excited to see what cool new features we’ll be adding.

I’d also like to do a little blurb about my progress in my capstone project. Even though it’s unrelated to open source, it’s something that I was passionate about working on for months. I say ‘was’, past tense, because we switched technologies and suddenly I’m not as motivated to put 100% into my work on the frontend. I admit it’s sloppy, but every time I add something new from the palette, I immediately get an error and I’m like “I KNOW I’LL GET TO IT AFTER I MOVE YOU IN THE RIGHT POSITION, STOP YELLING AT ME QQ”. It really just interrupts my workflow so much every time I get an error.

On another side note, our legacy Planet website officially passed away, so now I’m stuck reading posts on our staging box, and it’s totally weird because it’s not mobile friendly at all; every post is super skinny and there’s literally 1-3 words on each line, which is so hard to read… It was alright previously, but William suggested adding padding (it probably looks nice on desktop, but for me it’s absolutely terrible since I’m probably the only one that reads the site on mobile). Ugh. I miss our Planet. My second most frequently visited tab on my phone says “Decommissioned” now; I’ll keep it for a couple more days, then I’ll delete the tab. I used to open Planet up pretty frequently and I almost added it to my home screen, but I was like “eh it’s not an app, it’d be weird to do”, but then I was like “WHAT IF IT WAS AN APP??? I COULD BUILD IT!!! I COULD BUILD IT IN REACT NATIVE”. Hey Dave, if you managed to get to the end of my blog post, I would be totally down to build a Telescope React Native app, so if Seneca is hiring students for innovative projects, here’s an idea, and I’m up for hire after April 2020.

by Cindy Le at Sun Feb 09 2020 02:51:50 GMT+0000 (Coordinated Universal Time)


James Inkster

60%

There’s a plethora of things that are 60%.

Such as:

60% Keyboard
60% Cocoa

Illinois has set a goal that 60% of adults between the ages of 25 and 64 will have a college degree.

60% of people can’t go 10 minutes without lying.

And…

Telescope.

Telescope has been what I’ve been spending quite a bit of time on, and I ran into a couple of issues this time around. I feel like the gaps in my knowledge were exposed more, and it showed me I still have a huge uphill battle to continue to learn as much as I can. This project is now at 60%.

I’m operating at I think about 30% of my potential. What do I mean by that? I mean my writing has to improve, my reading has to improve, and I need to struggle more frequently.

Twice in the last two weeks I struggled a lot, both times resulting in incredible late nights. I think I have to fine tune my approach. I think I have to liken it more to working out.

When you first start working out, you do things incorrectly, certain muscles aren’t working right, and it’s a bit of a mess; you’re also struggling a lot.

Fast forward to two weeks later: you are probably closer to doing the exercises correctly (or pretty darn close), but the struggle is still there, just on a different page.

So for the past couple of weeks, I was taking a different approach, enjoying tutorials that I was attempting, such as Gatsby, building SSOs, and more. The problem is I don’t struggle during these tutorials; I follow them. That’s not a struggle. I would relate this to when you are a kid and you sit on your dad’s shoulders: he’s struggling but you’re having a blast. You gotta hang on to his head, but let’s be honest, it’s enjoyable, and it’s not THAT difficult at that age, proven by the fact you are a kid.

These aren’t helping develop my muscles. A tutorial can teach me the basics, but it’s not teaching me to really understand what is going on. It’s teaching me to follow instructions.

Going into the day to work on my Telescope issues, I felt I was going to put my knowledge to paper, and I’d have an enjoyable time coding.

Well, while coding, I realized I did not understand the flow; I had begun to rely on ESLint and Prettier to fix most of my errors. It’s a weird thing when a tool can be pushing you in the wrong direction, especially when something gets added. In this example it would be “this fixes your spacing, and makes your code look organized”. However, it does more than that; it can fundamentally change the way your code is written. This includes functions and certain situations where you need to pass variables through different functions: there are multiple ways to do it, and that can get “fixed” for you.

These aren’t mentioned in the tutorials, partly because the authors solved all those issues on their own, prior to even creating the tutorial. I’m not at that level, and I think that’s what my goal is: to be able to write a tutorial with a great enough understanding to help other people understand the problems and functionality of what you are doing.


So the 0.6 release had me learn quite a few different things – how SSO with Express works, how your front end to Express to SSO flow works, Gatsby, functions more in-depth, React more in-depth, the need to struggle more frequently, and to not just make time for the people you’re helping, but to make time for yourself to go seek help as well.

This is a hard thing for me to grasp, especially when I run micro teams for my projects. If I’m not around I feel like a zoo happens, when in reality they probably just don’t work at the pace I prefer to work at, which isn’t always a bad thing, because a faster pace can mean lower quality; it’s a very fine line to balance.

My next release, 0.7, is going to be interesting, only because I feel more free now that I am not working purely on SSO things. Given that the bulk of the basic functionality is done, I think the next steps will be more front-end oriented to set up, and then the back end will be 0.8.
I’m never sold on my performances, partly because I think I need a trophy to consider myself a winner from time to time. It’s very hard to say you’ve done your best unless you have a definitive pinnacle moment, and usually that results in a trophy or some kind of reward once you hit a bunch of criteria.

This kind of journey is different and one I’m not used to. I don’t know the criteria, but I know we are moving forward, which is difficult in itself to grasp, because in sports, which I’m used to, moving forward is usually defined as scoring more goals than the other team. Very clear cut.

And keep in mind, I don’t think you can set yourself a clear cut goal, but a clear cut vision is more important, and those are different things. My hope is that you’ll be able to go in there, set your feed in ‘settings’, change from light mode to dark mode and have it maintain those settings, and set your avatar on your posts.

In terms of self-reflection, there’s always more, and I think I have to change my style for the next release a bit to help those around me become a little less stressed during the final hours.

If debugging is the process of removing software bugs, then programming must be the process of putting them in.

Edsger Dijkstra

by James Inkster at Sun Feb 09 2020 01:05:20 GMT+0000 (Coordinated Universal Time)

Saturday, February 8, 2020


Rafi Ungar

Telescope: if you bite off more than you can chew, share it with the dogs (🐶🥫)

For the past several months, I have helped progress the development of Telescope, Seneca College’s new open source blog feed aggregator. Until now, my contributions have mainly been things I could (and did) do alone: fixing small bugs and implementing minor feature requests.

Two weeks ago, I set two new goals for myself: to take on larger issues facing Telescope, and to collaborate more closely with my fellow Telescope contributors on such issues.

What I bit off

I had originally planned to take on and resolve three issues by this time—each at least as large as any issue I had previously taken on:

  • Issue #595: Write Jest tests for src/backend/web/routes/opml.js
  • Issue #294: Keep feeds in Redis synchronized with wiki over time
  • Issue #624: Deal with 429 responses from medium.com, and other feeds

It’s perhaps also worth mentioning two issues that I intended to work on but that ended up being inadvertently resolved before I had a chance to: #562 (which I ended up resolving in my PR #550) and #608 (which @humphd ended up resolving in his PR #618).

What I managed to chew

My previous work for Telescope involved completing the implementation of Telescope’s API endpoint (currently /feed/opml) that shares its aggregated feeds as an OPML file (PR #394).

However, at that time, no tests had been written for that endpoint. As I had prior experience writing tests for the first iteration of Telescope’s inactive blog filter, I felt quite confident and well-equipped to tackle this issue (#595) on my own. So, I did so, resolving it with a PR (#644).

What I could not chew

Issues #294 and #624 are decidedly outside of my current zone of comfort and experience (e.g. with Redis)—ideal for collaborating on with other Telescope developers. Unfortunately, I was not able to address either issue fully in the time I had.

Over the next two weeks, I plan to prioritize these two outstanding issues, and to put our Slack channel to good use in order to communicate and collaborate on each issue.

What I chewed instead

During our latest sprint, I wanted to contribute more than a Jest test file for the OPML endpoint, so I piled on a larger (but manageable) issue into my food bowl:

Issue #638: Add tests for /feed/* routes

This issue was a logical extension of the work I had just completed, but on a larger scale: I was to write Jest test cases for not one additional backend route but four!
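
The general shape of such a route test looks something like this (a sketch using Jest and Supertest; the app path and assertions are assumptions, not the exact Telescope tests):

// Sketch only: exercising a feed route with Supertest inside a Jest test.
// Assumes the Express app is exported from the backend web module.
const request = require('supertest');
const app = require('../src/backend/web/app');

describe('GET /feed/rss', () => {
  it('responds with 200 and an XML content type', async () => {
    const res = await request(app).get('/feed/rss');
    expect(res.status).toBe(200);
    expect(res.headers['content-type']).toContain('xml');
  });
});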

How (not) to yelp for help

I had relatively little difficulty writing the tests for our RSS, ATOM and JSON feed endpoints. However, that changed when I attempted to test the structure of the data output by our /feed/wiki endpoint: my test seemed simply unable to fetch the data (whereas I had no issues doing so using our live URL)!

I was simply stumped, and assumed some complex underlying cause, so I decided to package a plea for assistance into the slides I prepared for our weekly progress report, hoping to solicit assistance as a part of my presentation. However, the feedback I received then made me realize that I should have simply brought up the issue in Slack, and it would have been resolved in minutes!

Sure enough, shortly after I brought up the issue in Slack, I received the help I needed. I was then able to complete the structural testing for our /feed/wiki endpoint and open a pull request for all of the aforementioned work (PR #670).


From now on, I really do plan to make much better use of our communication tools, including Slack, to seek out help when I need it!

Lending a paw

Over the last two weeks, despite my admitted shortcomings in asking for help, I have still definitely progressed with my goal of upping my collaborative efforts, mainly by reaching out to others who need help (and, in turn, being reached out to!)


During the final hours of our last sprint, I took it upon myself to review our Gatsby frontend’s login functionality. This was my first time building and interacting with our new Gatsby frontend, and I naturally ran into quite a few Gatsby-related issues that I finally resolved thanks to the help of the PR’s author, @Grommers00 (who also kindly sat down with me earlier to seek help reviewing my PR #644).

Furthermore, in the process of completing the review of the login code, I wrestled quite a bit with terminating a rogue Redis instance (which I later learned was running as a service on my VM; thank you to @manekenpix for walking me through the troubleshooting process for this)!


When last Monday’s triage meeting found itself short a leader, I volunteered to facilitate it, and continued to do so even after that leader arrived. I enjoyed that experience a lot more than I thought I would, and look forward to leading this next triage meeting, as well, alongside the aforementioned leader who subsequently volunteered to help me out to return the favor.

What’s cooking now?

During our current sprint, I plan to continue to take on larger issues; to collaborate more efficiently; to continue to take Telescope a little bit closer towards that highly-appetizing Release 1.0!

by Rafi Ungar at Sat Feb 08 2020 23:01:50 GMT+0000 (Coordinated Universal Time)


Lexis Holman

Having Fun

Hello again! In my previous post I mentioned what I have been up to and some of the project ideas that I had for my time off. So today you get a quick update, and then I want to write about a recent problem that I was able to solve in a somewhat different way.

First up, in my last post I talked about a TRS-100 series portable computer from sometime in the 1980s and mentioned how it is still (supposedly) able to interface with modern computers via its RS-232 port. Well, I got my hands on a USB-to-serial cable and a DB9-to-DB25 adapter and was able to connect the two; I really wish I had researched more before trying this. The screen went instantly dim and the power light came on with the switch in the off position (not a great sign). Pretty sure this is because I was supposed to connect the two with a null modem in between the two points, so who knows how much voltage I sent through what lines. Long story short, it still works, and it was a lesson not to assume anything with old computers; after some research I learned that standards at that time were a bit of a mess.

Will post more when I acquire the rest of the required materials, it never ends.

As for my progress on my Prescott/Presler space heater, err I mean cluster idea, I am only thinking of ideas at the moment. Half of the would-be compute nodes are in storage in a non-disclosed facility (a garage), so I can only really plan at this point. However, I am thinking that establishing a PXE environment and extracting an existing 32-bit Arch Linux OS image from one of the machines’ hard drives may provide me with the required lightweight 32-bit distro for this project. This idea is still basically a rough sketch on a napkin somewhere at the bottom of a drawer, but it’s not the first time I’ve maintained this style of infrastructure; I will keep you up to date on any progress.

Challenge:

And finally, the thing that initially inspired this blog post. This is not really intended to be a write-up of a recent challenge I attempted, but instead a report on what I feel is a somewhat unconventional method of completing the challenge. Maybe one day I will consider doing write-ups for events and challenges, but for now I will stick with ones that I find interesting.

For this challenge I was given a binary executable with these attributes:

Format: executable ELF x86_64, Dynamically linked and stripped of debug symbols.

Size: 6.2K

The requirements are to get a text string from the executable file, and there were no restrictions in terms of static vs. dynamic analysis of the running executable.

Begin:

To start, I opened the file with IDA (free version), where it was very obvious that this is a very simple and small program consisting of fewer than 10 subroutines, including main and excluding system functions like printf().

The first thing to note is that in main, a string is being built on the stack relative to the stack base pointer: the program moves characters one by one into memory locations to form the first password string from (seemingly) random locations. This may be to prevent a simple string search from revealing the “first” password of this program. The string consists of integer values that are obviously ASCII codes (0x41 to 0x57 and 0x61 to 0x67, uppercase and lowercase respectively), so to get this string it was just a matter of converting the hex to ASCII in the order it is appended to the stack.
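
(If you want to do that conversion quickly, a couple of lines in a scripting console will do; the byte values below are placeholders, not the real password:)

// Placeholder bytes, not the challenge's actual values: convert each hex code
// to its ASCII character in the order it is pushed onto the stack.
const bytes = [0x53, 0x61, 0x6d, 0x70, 0x6c, 0x65];
const password = bytes.map((b) => String.fromCharCode(b)).join('');
console.log(password); // "Sample"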

From there the program makes a call to scanf() and loads rdi and rsi before making another call to strcmp(); finally the program makes a JZ (jump if the zero flag is set), one branch being a call to exit() and the other a continuation of the program.

The second block is still within the same function as the previous string build-and-compare noted earlier, this one starting at loc_400925. However, in this instance, instead of comparing two strings as before (one string already existing in memory and the other user input), this block gets user input and then calls another function which builds the string AFTER, as we will see.

Note here that we are getting the time, which presumably is just used to seed rand (a common seeding method, as we programmers know); a call to srand is then made, and a buffer is filled with 0x14 (20) random characters. Does this program really expect us, the user, to input a password that hasn’t been generated yet and is also made of RANDOM characters? We have two options here. We can go into the routine to correctly analyze the algorithm that creates the flag, or we can do this with dynamic analysis. Dynamic analysis meaning we start up the program and see if our assumptions were right; it’s never fun to find out you spent all day tracing a rabbit hole that leads nowhere.

Let’s fire up a bash terminal (remember, this is an ELF, not a PE) and run it. However, here is where I am going to break for a bit of a warning: for those of you who may want to get into R.E. challenges as a hobby, I will note that even with a reputable source for the binary image, you are still running a fairly unknown piece of software on your computer. I seriously recommend setting up a lab environment before running anything off the internet. Know your tools, your environment and what you are doing; it’s your responsibility if you make a mistake.

Now, let’s get back into it: fire up the program and enter our first string. As expected, we are provided with the output from loc_400925; remember the two asterisks before the call to printf in the first image above? Now for the tricky bit: how am I going to enter a password of random characters that hasn’t even been generated yet? Well, we can assume that the string is hardcoded, albeit obfuscated, in the binary itself, which with enough persistence we could step through and try to get an understanding of; however, I have some other ideas. I will try and trick the program into taking an incorrect branch and run through the code dynamically; my first choice for doing this is going to be to set the two registers that strcmp() works on to the same value.

Using GDB we can modify values with the “set” command. For our purposes it takes the form of:

(gdb) set $rdi = $rsi

Then, resuming execution, the program seems to fail and exit. I am curious to explore the reason for this, but for time reasons I will move on to my next idea, which is to set the Zero Flag bit in the EFLAGS register using the same method. Note that because we are setting a bit within the register, we need to preserve the state of the rest of the flags.

Here’s a method that I found to do this:

https://stackoverflow.com/a/31339372

Which goes something like this:

(gdb) set $eflags |= (1 << 6)

They do it a bit differently in the link above, but you can explore that more yourself. Basically this shifts the value one, or in binary 00000001, left by six, making it 01000000, which represents the zero flag bit within the eflags register.

Checking this with the GDB command “info registers” (or “i r”), before the modification we see:

eflags 0x202 [ IF ]

After running the above modification:

eflags 0x242 [ ZF IF ]

This indicates that it should work; although when we run it, it fails with no output. Maybe we’re missing something, but I have one more idea. Being completely certain that controlling this single jump-if-zero instruction is the key to success, I am going to attempt to modify the binary itself. To do this I will use dhex, but you can also use hexedit or whatever your preference is.

Hex editing:

First, make a copy of the original binary, as we are modifying the binary itself! If you don’t have a copy, you may not easily be able to revert back; again, it is your responsibility to understand this if you choose to attempt this type of modification to your binaries.

To start, we are going to have to find the exact bytes we need to modify. I will do this by highlighting the jump instruction I want to modify in IDA. From there I go to View -> Subviews -> Hex dump; on this screen the highlighted hex value is our jz instruction. This is where you will need to be familiar with popular opcodes; you can look these up online, but over time you will be able to recognize ASCII and whatnot. Anyway, the byte we need to modify is a jz, or 0x74, and we can convert it into the opposite with a jnz, or 0x75.

Here is a related table:

http://ref.x86asm.net/coder32.html

Anyway, hopefully I didn’t get too far off topic with the opcodes. I apologize and may rewrite some of this for clarity, but for now I hope you can follow.

Back in IDA, with our hex dump open and our opcode highlighted, let’s copy and paste approximately 8 to 16 full hexadecimal byte values from that area into a notepad. Next we open our hex editor; for me, as noted, that is dhex, where I type “/” to initiate a search:

Cursor onto Searchstring (hex) and hit enter, then enter the bytes that we copied from IDA; finally cursor down to “GO” and hit enter:

Our screen is then set to the opcode instruction (not guaranteed, however). At this point we can type over the 74 with 75, hopefully changing the jz to a jnz.

Once that is complete we hit F10 and continue on:

Okay, let’s run it. Here we should now be able to enter any random value; however, when the strcmp() fails, instead of going to the exit routine it should (again, hopefully) take the branch to our flag routine. This means that when we run it, as long as we don’t type the correct 20 random characters, we should branch into the flag generation logic.

Interestingly Enough:

We now have our Flag.

Conclusion:

Although it is not my intention to (at this time) do write-ups for the challenges that I am using to pass the time until I find employment, I did want to share this one with you, as it is a good example of how fragile and amazingly simple computers are once you strip away the GUI and other obfuscating and abstracted elements.

Note that I have removed the actual flags themselves so as not to completely give away points for free. However, if you choose to invest time in this hobby in a safe way, then hopefully this can give you an idea of what’s going on, although modifying the binary is a bit unconventional. Also, it may be a good idea to go back and try to get a handle on how the flag string is actually being created; this is all about learning, after all.

Cheers!

ElliePenguins

by Lexis Holman at Sat Feb 08 2020 20:30:34 GMT+0000 (Coordinated Universal Time)

Friday, February 7, 2020


Miguel Roncancio OSD600

Collaboration in Open Source

I’ve mentioned in previous posts my involvement in a blog aggregator project, Telescope. In our most recent sprint, our goal was to have a minimum viable product, or as others like to call it, dog food. It didn’t have to be pretty, it just had to work.

There were a few components that needed to be completed in order for this to happen, including setting up a reverse proxy using nginx (thanks @manekenpix), adding SSL certificates with Let’s Encrypt, and connecting the front-end to the back-end. I was fortunate to have had a hand in all of these parts of the code base, and I feel I have learned a lot from both the front-end and back-end tasks that I completed. But there is a bigger lesson that I took home from this sprint, and that is collaboration.

In the past, my involvement in open source projects has been to get an issue assigned to me, then go off on my own and try to solve the bug. Because of the amount of work that needed to be done as well as the complexity of it, this approach to collaboration would have been highly ineffective. As a result, I was working with multiple people on the same issues. This was ideal because everyone brought some knowledge that only they had which helped keep the project moving forward.

One tool that is especially useful for this type of scenario is Git. Although this is not how we collaborated on this project, in hindsight it would have been useful to create a branch in one of the contributors’ forks and have everyone involved contribute to that branch. In this way, when a pull request is made, everyone will get credit for what they contributed.

But what about if only one person committed the changes but many others contributed?

Well, fortunately GitHub allows you to co-author commits.
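
The mechanism is just a trailer at the end of the commit message; something like this, with placeholder names and emails:

git commit -m "Add reverse proxy configuration

Co-authored-by: Jane Doe <jane.doe@example.com>
Co-authored-by: John Smith <john.smith@example.com>"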

Overall, I really enjoyed this kind of collaboration and hope to do it more often.

by Miguel Roncancio OSD600 at Fri Feb 07 2020 11:59:05 GMT+0000 (Coordinated Universal Time)

Sunday, February 2, 2020


Cindy Le

OSD 700 – Release 0.6 Progress

For 0.6, I’ve assigned myself Issue #530 which is pretty much documenting what endpoints we currently have. This shouldn’t be hard to do. As I look at my calendar and notice that today is the first of February, I also note that I’ll be entering Week 5 of this class in a couple days and I haven’t written a single line of code (how I managed to do that, I don’t know). I’ve done 4 weeks of documentation, and a ridiculous amount of testing and code reviewing. I didn’t intend to do docs for so long, I really had a masterplan going. I originally was gonna do docs for TWO weeks then hop over to do frontend work but I wanted to see a design that I liked before I commit. I’m pretty shallow, if it looks beautiful, I’m all over it. After the presentations on Wednesday, I saw that there was a dire need for frontend developers and the design proposed was good. I can tell the designer put extra care into the little details. I can definitely work with people who care about UX and want to make each page visually balanced. I was added onto the Adobe XD project as a collaborator which lets me see timestamps of revisions and I can see that she spent at least 6 hours yesterday working on the updated design after getting feedback on Wednesday, she also spent 2 hours making on-demand edits for two very nit picky developers in the GitHub discussion. Wild. I made some minor edits to the mockup before yesterday so I’m hoping I didn’t contribute to her time lost fixing my boo-boo’s on Adobe XD, I’m still pretty green.

On to my next issue, #517. This one is super cool and if I can get it running, we can automatically deploy frontend pull requests so everyone can see them. Reviewing will be easy peasy. I just need to actually start it…

I’ve been noticing how hard it is to initiate projects even with a plan. I just started coding for my capstone project today in JAVA (oh lord, I can’t believe I said that in a sentence). I still hate Java with a passion. So far, I’ve been the only one to push code into our repo… I personally think we’re super behind but my other two members think we’re good on time. They must be following a different calendar than I am.

This is literally just me monologuing in our team Discord chat

Future employers, don’t offer me a Java job… I’ll only accept it if I’m really hungry.

by Cindy Le at Sun Feb 02 2020 01:30:13 GMT+0000 (Coordinated Universal Time)

Saturday, February 1, 2020


Josue Quilon Barrios

Telescope: Nginx and Let's encrypt

So, Telescope is getting closer to becoming the successor to PlanetCDOT. If you still don't know what Telescope is (shame on you), check this out.

Nginx & Let's Encrypt
I spent this week researching nginx to add a reverse proxy to our staging box so we could add some cool features. Also, I worked with another contributor (@miggs125) to add Let's Encrypt so we can use SSL.

After dealing with some configuration issues, we managed to get it to work, and once we merge the necessary changes, we'll have Telescope using SSL. I think I've probably said it too many times, but once again, thanks to Telescope, I had the chance to learn new things: I got to learn how to set up a reverse proxy and how to add SSL to a site.

Feed endpoints
As I mentioned in previous posts, Telescope has a REST API and GraphQL. The endpoints to serve posts were implemented some time ago, but we were still lacking a way to serve feeds to our frontend. Since I collaborated on adding the endpoints for posts, I thought it'd be a good idea to work on the same issue but for feeds. I added endpoints to request all feeds and a single feed, along with some tests for those endpoints.
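
The shape of those endpoints is roughly the following (a simplified sketch with a placeholder storage module, not the actual Telescope router):

// Simplified sketch, not the actual Telescope code: a router that returns all
// feeds and a single feed by id, backed by some storage layer.
const { Router } = require('express');
const { getFeeds, getFeed } = require('./storage'); // placeholder module

const feeds = Router();

feeds.get('/', async (req, res) => {
  res.json(await getFeeds());
});

feeds.get('/:id', async (req, res) => {
  const feed = await getFeed(req.params.id);
  if (!feed) {
    return res.status(404).json({ message: 'Feed not found' });
  }
  res.json(feed);
});

module.exports = feeds;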

GraphQL filters
I also did some work for GraphQL. Another contributor (@c3ho) is working on adding filters to our Apollo server, so our queries will be more flexible and efficient. It's a very tricky task, so I offered to help since I worked on the initial addition of Apollo to Telescope. We've made some progress and, to be honest, if he manages to include all the functionality we want (taking advantage of some of Apollo's features), the improvement will be significant. Seriously, really interesting stuff.


by Josue Quilon Barrios at Sat Feb 01 2020 18:37:00 GMT+0000 (Coordinated Universal Time)


David Humphrey

How to do design reviews

This week a number of the Telescope developers have been trying to nail down a design for our frontend, such that we can make progress on our 0.6 release, and as Cindy put it, "give @humphd his dog food."

We've spent a lot of time doing code review, and studying how larger organizations do their own reviews.  As the weeks and months have ticked by, I've watched as the students have gotten more comfortable iterating on a piece of code, lost some of their hesitation for sharing their work before it's "perfect," and learned how to frame criticism as a helpful tool instead of something meant to hurt.

Now we're starting to focus on the design, UI, and UX of our app, and we're doing that in the context of our regular development flows: filing issues, creating pull requests, giving and receiving reviews, merging and shipping code.  Immediately we've hit problems.

The first thing I noticed was that design discussions in GitHub didn't move us forward.  While many of our team members could be assigned any of the bugs in our current project milestone, it's not so easy with design tasks.  We all want the final result to be good, and we're invested.  But without some sort of ownership, and ability to make choices, it's hard to even get started.

The next thing I noticed was that many of us (myself included) lack the language necessary to give effective critique of design work:  "This is too dark."  "This is too cluttered."  "I don't like such and so." If I saw our developers giving code review feedback like this, I'd be frustrated, and ask that more actionable feedback be given, such that the developer can move forward.  Surely we owe our designers the same?

I am not a designer.  I have the utmost respect and appreciation for good design, and if you'll allow me to be so bold, I also have good taste.  But these do not allow me to create good designs.  So I reached out to a few of my former Mozilla design colleagues to ask them how they handle design review.  I was interested both in how to give and receive feedback with a design.

  1. Cassie McDaniel, who is now Design Director at Glitch, sent me an article she wrote for A List Apart, "Design Criticism and the Creative Process."
  2. Darrin Henein, who is now Director of UX at Shopify, sent a fantastic blog post he'd written, "Practical Design Critique: how to give and receive feedback," which tries to capture his learning from working at Mozilla.

Both of these are so well thought out, and come from years of experience designing for large companies and projects.  I was going to try and summarize them, but as I read deeper into both pieces, there's no way I can do that effectively.  There is so much great advice and perspective in these that I'd encourage you to read them in full as you have time.

As an aside, I've been lucky to work with gifted designers like Cassie and Darrin in the past, and it's such a rewarding experience.  As a programmer and writer, it's hard to describe how amazing it is working with people who are able to connect your code and text into human experiences by designing for the real world.  I hope I'll get to do it again soon.

by David Humphrey at Sat Feb 01 2020 00:58:30 GMT+0000 (Coordinated Universal Time)

Friday, January 31, 2020


Julia Yatsenko

Telescope

This is my last term in the Software Development program at Seneca College. One of the classes that I chose for this term is part 2 of the DPS class, so I am back to posting about my adventures in the Open Source world.

The last time I wrote a post in WordPress was more than a year ago, and during this time things have changed in Seneca's Open Source (OS) community. Now, almost all the people in my college who are involved in OS are working on a project called Telescope. Before I came to class I had never heard about it. My professor and new classmates decided that they would like to test how easy the 'on-boarding' process is for newcomers. And I would be the test subject.

I should say that my acquaintance with Telescope took place 3 weeks ago, so my emotions have faded. I will not go into the details; I'll just list the things that changed in Telescope after I tried to 'touch' it. My main task for the first week was to go through the project's documentation and file an issue for any unclarity or confusion I spotted there. I thought that since everybody in the class had already been contributing to Telescope for at least one term, the setup should be easy and quick.

I started by reading about what Telescope is in the README.md file. This is where I filed my first issue.

If you are interested in reading an updated, nice and comprehensible version of what Telescope is, here it is.

The next step was the setup. Here is where my torment started. I did not understand a thing there; almost every step was problematic. I can't say that I went through hell and high water, but I definitely felt lost and stupid.

“This is my last term and you cannot even set up a project on your laptop” — that was my primary thought. Luckily, it turned out that I'm fine and a lot of the stuff in the setup instructions was messed up. Here is one more issue, created by me:

I got a reply from one of my classmates who was working on the documentation at that time.

As you can imagine, the CONTRIBUTING.md file also went through some major changes.

Overall, I guess it was a tough beginning, but I learned a lot. During the process I became familiar with things like Docker, Redis, and .env files, and refreshed my memory of what it feels like to do OS. My next post, about my first contribution to Telescope, will be here soon. Stay tuned!

by Julia Yatsenko at Fri Jan 31 2020 00:32:06 GMT+0000 (Coordinated Universal Time)

Thursday, January 30, 2020


Calvin Ho

GraphQL Nested Queries

The whole point of GraphQL is its flexibility: I can view all the authors in the database, and then add an additional query that displays all the books by one author. We call these nested queries. I recently spent an afternoon + evening with @manekenpix taking a look at nested queries in GraphQL for the Telescope project.

We currently have a schema like the one below:
# 'Feed' matches our Feed type used with redis
type Feed {
  id: String
  author: String
  url: String
  posts: [Post]
}

# 'Post' matches our Post type used with redis
type Post {
  id: String
  author: String
  title: String
  html: String
  text: String
  published: String
  updated: String
  url: String
  site: String
  guid: String
}

Notice that Feed can also return an array of Post. To allow nested queries, we have to define them in the resolvers after the Query:

module.exports.resolvers = {
  Query: {
    // Queries are here
  },
  Feed: {
    posts: async parent => {
      const maxPosts = await getPostsCount();
      const ids = await getPosts(0, maxPosts);
      const posts = await Promise.all(ids.map(postId => getPost(postId)));
      const filteredPosts = posts.filter(post => post.author === parent.author);
      return filteredPosts;
    },
  },
};

What the above code does is get all Posts in the database, then filter them, returning only the Posts whose author matches the feed's author. For example, if I run the following query in GraphQL

{
  getFeedById(id: "123") {
    author
    id
    posts {
      title
    }
  }
}

and the feed's author is Marie, then the parent parameter provided to the nested query (posts) will be the result of getFeedById, so in this case parent.author is Marie.

Real life data using a classmate of mine:



by Calvin Ho at Thu Jan 30 2020 15:03:17 GMT+0000 (Coordinated Universal Time)

OSD700 Release 0.5

As part of 0.5 I was working mainly on two issues and got a chance to help someone start contributing to Telescope.

Async/Await
I've blogged a bit about using async/await to replace our Promise code in Telescope. I started the work during the winter break and was finally able to get it merged this week. The issue actually took a while, as it spanned ~15 files in Telescope and had me refactoring functions and tests at the same time, which admittedly was pretty scary. I can say I know how to use async/await a bit better, but there's still a long road ahead!
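
To illustrate the kind of change (a made-up example rather than actual Telescope code, reusing the getPostsCount/getPosts/getPost helpers from the nested-queries post above), the refactor mostly turns chained .then() calls into awaited statements:

// Before: a Promise chain
function getPostsForAuthor(author) {
  return getPostsCount()
    .then(count => getPosts(0, count))
    .then(ids => Promise.all(ids.map(id => getPost(id))))
    .then(posts => posts.filter(post => post.author === author));
}

// After: the same logic with async/await
async function getPostsForAuthor(author) {
  const count = await getPostsCount();
  const ids = await getPosts(0, count);
  const posts = await Promise.all(ids.map(id => getPost(id)));
  return posts.filter(post => post.author === author);
}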

Kubernetes(minikube)
The other issue I've been working on is a collaboration between me and another classmate, @manekenpix, to deploy Kubernetes (minikube) on a site for Telescope at http://dev.telescope.cdot.systems/planet. We've had success deploying services and even got the ingress to work on our own machines locally. However, after 5 hours of sitting down and lots of expletives yelled at the computer, we hit an issue when trying to deploy it on the machine CDOT has prepared to host Telescope. We forgot that minikube runs inside a VM, so exposing the service and deployment only really exposes it to the computer the VM is running on. After a bit of researching and asking around on the Slack channel, we have decided to try a bridged connection to expose the VM to outside traffic. We're crossing our fingers to have this for 0.6 (hopefully).

Helping a new contributor
Lastly, our professor Dave Humphrey has been actively recruiting students from his other classes to participate in Telescope (where was this teacher when I started learning web development?). I think this is an amazing idea, as they gain experience in filing and fixing issues, receiving feedback, and just collaborating with other programmers on an open source project. One student took on a great starter issue to standardize the error codes in the project. I acted as a kind of mentor, helping the contributor get their code merged. This gave me flashbacks to OSD600, where our professor pretty much spent the whole semester teaching git and helping students with their git problems. Long story short, the student was able to get their PR merged and is happily taking on another issue. Git is hard, and it is even more so when things land daily if not every few hours; the student admitted he had used git before, but wasn't used to the pace at which Telescope moves.

The mentoring also taught me something: our professor has started to emphasize the importance of submitting a PR with some work completed instead of waiting for a full-fledged PR. This way, if the current work is starting to go sideways, the community can direct the contributor back to the correct path, preventing them from going further down the wrong one. For example, the contributor I was helping kept trying to rebase, apply their changes, and then commit to their PR all in one go, and this kept failing. Instead, I asked the contributor to:
  1. rebase their PR, drop any unrelated commits, and push the code to their PR. At this point we'd review and see what other changes we needed to make, such as whether we had to bring any files over from master to the working branch because a file on the working branch was too far gone.
  2. if the current status of the PR looked good, apply their changes to fix the issue, and then review to see what other changes we needed to make.
This approach worked a lot better and the contributor got their PR merged today!

In hindsight, I think I've become a better programmer. Four or five months ago I was attempting to enhance another person's simple note-taking app on GitHub.

by Calvin Ho at Thu Jan 30 2020 15:03:53 GMT+0000 (Coordinated Universal Time)


Vimal Raghubir

Converting a custom Darknet model to TensorFlow Lite

Tensorflow’s framework for mobile devices

Intro

Have you tried building a custom Darknet model and realized that it will be very difficult for this model to run in a mobile app? Have you found yourself looking for a way to convert this model into a mobile-compatible format (TensorFlow Lite)? Well, I hope that the solution I used will work for you. (Skip to the What did I use to convert? section for the solution.)

Long story short, I managed to train a custom tiny-YOLO v3 model using the Darknet framework and needed to convert my model to the TensorFlow Lite format. If you are wondering why, please read the 2 sections below.

What is Tensorflow Lite?

Tensorflow Lite is an open-source framework created to run Tensorflow models on mobile devices, IoT devices, and embedded devices. It will optimize the model so that it uses a very low amount of resources from your phone.

Why do I need to convert?

By default, TensorFlow Lite interprets a model once it is in the FlatBuffer file format (.tflite), which is generated by the TensorFlow converter. Before this can be done, we need to convert the Darknet model to the TensorFlow-supported Protobuf file format (.pb).

To simplify that for you:

yolov3-tiny.weights → tiny-yolo-v3.pb → tiny-yolo-v3.tflite.

.Weights -> .Pb conversion

I used mystic123’s implementation of Yolo v3 in TF-Slim to do the conversions. Before cloning this repository, you need to have Python version ≥ 3.5, and Tensorflow version ≥ 1.11. I used Python 3.7.4 and Tensorflow 1.15.

The command to convert .weights to .pb is:

python convert_weights_pb.py --class_names "~YOUR PATH~/class.names" --weights_file "~YOUR PATH~/yolov3-tiny.weights" --data_format "NHWC" --tiny

I strongly recommend using absolute paths for this since it guarantees the script will find the files you need. An example of a good path would be D:/tensorflow-yolo-v3-master/data/tiny-yolo.weights.

After you run this you will get your Protobuf file and now you need to convert it to tflite format. You can also experiment with NCHW but this gave me some issues in the next step.

.Pb -> .Tflite conversion

To perform this conversion, you need to identify the name of the input, dimensions of the input, and the name of the output of the model. If you already know these values then skip to the command below.

If you do not know these values, then you will need to download Netron. Netron will help you visualize your .pb file and will display all of the layers in the model. Once you launch Netron, open the .pb file created above and look for the name of the input layer and its dimensions; the output name can be found by double-clicking on the final layer.

Getting the input layer values with Netron
Getting the output layer values with Netron

To run the command below you will need tflite_convert to be installed. If you have Tensorflow version ≥ 1.9 then it is already installed. The command is:

tflite_convert --graph_def_file=~YOUR PATH~/yolov3-tiny.pb --output_file=~YOUR PATH~/yolov3-tiny.tflite --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --input_shape=1,416,416,3 --input_array=~YOUR INPUT NAME~ --output_array=~YOUR OUTPUT NAME~ --inference_type=FLOAT --input_data_type=FLOAT

This should generate a file called yolov3-tiny.tflite. If it did then congratulations because this conversion was not easy! If this worked for you then I am extremely happy and if it didn’t then you should be very close to the solution. Take care and bye for now!


Converting a custom Darknet model to TensorFlow Lite was originally published in Analytics Vidhya on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Vimal Raghubir at Thu Jan 30 2020 04:32:42 GMT+0000 (Coordinated Universal Time)

Monday, January 27, 2020


Timofei @fosteman Shchepkin

Confidence = Competence +

“Confidence is the first requisite to great undertakings.” — Samuel Johnson

Confidence is a belief in one's ability to achieve goals despite obstacles, defiance, and opposition. Is this a kind of unstoppable superhuman (think Silver Surfer) power? Reflecting back, I simply don't see any connection with characteristics I was endowed with by birthright. Notwithstanding my general low self-esteem during my early years, I thought about the things that gave me more confidence than others. I aimed to achieve only those, often seemingly insignificant, outcomes that gave me more confidence. I naturally steered away from things that drained my confidence. Considering this, what then is the source of confidence?

Purposeful thinking and action: competence, congruence and connection.

Competence

“As is our confidence, so is our capacity.” — William Hazlitt

Thanks to Dr. Carol Dweck and her findings about the growth mindset, we know that humans are educable in any subject, at any age. Moreover, we are quite capable of benefitting from purposeful cramming, or Ultralearning, as Scott Young calls it.

During one semester of studying computer programming, I had laid before myself on the desk a daunting volume, Introduction to C++14. Well, a thousand-mile track begins with baby steps, right? Or so I thought, until I was typing away at case studies and taking on the challenges offered at the end of the chapters. In the end I graduated from that class with an A and an apprenticeship with my professor for the upcoming semester.

This means that the more knowledge, skill, and ability (that is, competence) I gained in C++, the more confidently I flung open the book at the next sunrise. I was continually exercising this “confidence-competence loop”, stretching my abilities in a flow state. That repetition and stretching led to more learning, and so to more competence. More competence, then, begot more confidence, and round and round it goes. You can see how I naturally excelled in that class.

The first time you're at the gym, you don't really know what to do with all the weights and machines, so your workouts aren't in flow and are maybe even awkward. Soon, however, you are entering the room with a smile and looking others in the eye, not at the floor or the wall. The more you go, the more you know. You weren't born in the gym, but you got confident coming there. It's not a fixed personality trait; it's a muscle built through exertion.

The more time you spend at a task, the more learning you do beyond the curriculum, the more practice you put in and skill you develop, the more confidence you will have from then on. You know what to do and how to add value in that area of your expertise.

We are learners by nature, and the empowering belief we must maintain, be reminded of, is this ability to learn.

You can learn what is necessary to contribute in your future.

I believe in my ability to figure things out, having learnt many things, including arithmetic; several languages, including love, poetry, and programming; social and emotional intelligence; the art of seduction; development of frontend and backend solutions; machine learning; guitar; riding a bike; traditional and healthy cuisine…

It's vitally important to make it a habit to appreciate the gap in skills and knowledge that you've filled in the past. Ponder the lessons from your wins. Give credit to yourself and allow the wins to integrate into your psyche. You deserve the power they provide.

And as you strive, begin the practice of self-reflecting on the progress you are marking off in your planner. What did you learn? What did you handle well? What do you deserve to give yourself a pat on the back for?

by Timofei @fosteman Shchepkin at Mon Jan 27 2020 16:07:11 GMT+0000 (Coordinated Universal Time)


Lexis Holman

Free Time and My Project(s)

Hello again friends! It has been some time since I have made a blog post, mostly because I have graduated from my program (yay!) and I am getting antsy to dig into SOMETHING productive.

After piecing together the next steps in terms of my career, it is time for me to dive into another project. However, instead of working for grades and credits, I will be able to work on something for myself, which sounds wonderful but has actually been difficult, because: what should I work on? Should I apply some of what I learned in school to add features or attempt another optimization for some selected open source project? Where to start, what to do.

Well, to begin, I have mostly been working on learning things I did not have time for while I was in school, taking the time to climb further down the rabbit holes that I've wanted to explore for a while. This has mostly involved one of the best purchases I've made:

https://nostarch.com/tlpi

I absolutely recommend this book.

Other things I have been playing with include making an attempt at learning some basic electronics. By using this book:

https://nostarch.com/arduino

For those of my readers who may not already be aware, you can get great deals on tech E-books through this site, while also helping the charities of your choice; check it out:

https://www.humblebundle.com/

Now, back to what I have been up to with electronics: the above-mentioned book gives you a good introduction and eases you into reading electronic schematics. Paired with lots of images of electronic components, it seems to be a good resource for getting started with the principles of electricity and simple circuit design. This has been fun, but it is a bit of a hard reminder of why I dedicated myself more to code than to learning to wield a soldering iron. Still, it would be really nice to be able to fix that broken 1541 drive.

Otherwise, most of my time has been dedicated to practicing applied elements of Linux development, such as building kernel modules from scratch; something I was not able to keep up with while in school. I also learned a neat trick for lower-level manipulation during an online course:

A neat trick:

This involves a union of an unsigned type and a bit field, which creates a type that can be both interacted with as an integer (for example) and also have its individual bits manipulated to modify that value. It goes something like this, and could be a good asset in someone’s personal library:

#include <stdint.h>
#include <stdio.h>

typedef union {
   uint8_t byte;            /* the whole value as an integer */
   struct {
      uint8_t  b0     :1;   /* on common ABIs this is the low bit... */
      uint8_t  b1     :1;
      uint8_t  b2     :1;
      uint8_t  b3     :1;
      uint8_t  b4     :1;
      uint8_t  b5     :1;
      uint8_t  b6     :1;
      uint8_t  b7     :1;   /* ...and this the high bit, though bit-field
                               order is implementation-defined */
   } bits;                  /* the same byte, bit by bit */
} ubit8;

int main(void)
{
   /* instantiate, manipulate, display: */
   ubit8 value = { .byte = 0xFF };      /* all eight bits set (255 is the max for 8 bits) */
   value.bits.b7 = 0;                   /* clear a single bit */
   printf("Value: %d\n", value.byte);   /* prints 127 on a typical compiler */
   return 0;
}

It should also be possible to extract bit values based on the integer data that byte is set to. Play with it and see what you can make it do. Also, because it is a union, it shouldn't take up any more space than a standard uint8_t; but I may be wrong, computers and standards get weird at this level.

Some Other ideas:

Recently I came across an opportunity to acquire a lot of (old) computer memory modules (RAM), about 8 gigs worth. Now you might think this is not much at all, considering most modern laptops ship with 8 to 16 GB give or take, but I can't emphasize enough: “old,” like LGA775 old. To clarify, when I was learning computers some time ago I acquired as many Pentium 4s (actually more Prescott/Presler era) as I could get my hands on, some of which have only 256 MB of RAM (great, if Arch Linux hadn't deprecated 32-bit). But they did, and I have been trying to think of what to do with these machines.

Well, I have a lot of ideas, but nothing that's going to run in that environment. Now, with this resource in hand (and a massive thank you again to this person for not throwing away usable hardware just because of its age), I might be able to bring some of these machines back out of retirement.

My one idea for this is to find a usable 32-bit Linux distro and then attempt some OpenMPI clustering (crazy, I know).

 

Another thing was coming across my old Tandy TRS-102; here's some info:

https://en.wikipedia.org/wiki/TRS-80_Model_100

This thing is brilliant: it runs on AA batteries and has a really impressive keyboard. The best thing about it is that it has an RS-232 interface via a DB25 port, meaning it should be able to interface with my modern systems. Other people in the vintage and tech community have made this work effectively, and there are projects for it, such as:

https://www.cinlug.org/files/db/trashtalk/index.html

(I don’t personally know the validity of this yet though.)

This could be a fun project as well, and it would be about time that I learned more about the serial protocol, although I love this machine and it's not like you can just get them off the shelf at RadioShack anymore. It does have an “extended” warranty, apparently.

 

My Personal Project

Finally the other reason for this blog post is to say that I am hopeful to dive back into my personal UserData library. This library started out as an “I love to code,” “make C style linked lists accessible” type of project, which may be the best way to describe it. Here’s a link:

https://github.com/ElliePenguins/UserData

My thoughts are that this library will hopefully be more than just a linked list library; instead, it is made of linked lists of linked lists where each node has metadata nodes attached. From these data structures I foresee the ability to add, modify, or delete entries, and to have this library be responsible for handling any data within any program a dev may want it for. Basically, just create a head node, call an initialization function, and then retrieve and manipulate data via predefined functions. At least, this is the direction I am pushing this block of code; but ultimately I just love to code in C, and if someone finds it useful along the way, that is great.

Project status: I have just added a Makefile (finally) that allows a developer to both create the shared object and optionally create a simple CLI program that uses the library for development and debugging purposes. You can look at this Makefile for more info; it's pretty straightforward. Otherwise, I am still getting lots of segfaults, some of which can take time to replicate due to the nature of the internal linked lists of linked lists (a lot of strange memory handling).

Hopefully between now and finding employment I should be able to keep marching forward on this. Not just for the experience of maintaining a complex project myself, but also for a reason to get up in the meantime.

Specific next task: re-familiarizing myself with the code and the call stack at different points of execution; then, try to determine the best way to make this somewhat usable.

 

Wish me luck,

ElliePenguins

by Lexis Holman at Mon Jan 27 2020 00:32:23 GMT+0000 (Coordinated Universal Time)

Sunday, January 26, 2020


David Humphrey

On Telescope 0.5 and Dogfooding

Winter 2020 Teaching

This term I'm teaching a bunch of introductory web programming classes (WEB222), and also the second open source course (OSD700, DPS911).  I really enjoy teaching all of these courses, and doing them together is an interesting challenge.

The courses are completely different, and exist at opposite ends of the students' time at Seneca (early semester vs. final).  Having my head in both the beginning phase of web development and also in real-world software on GitHub, is a fantastic way to build my skill and empathy at working with the students.  In a given day I'm as likely to spend time helping people learn how to use Arrays as I am debugging complex Arrays of asynchronous Promise chains talking to Redis, deep in a node app.

This term I'm working on doing some updates to the web curriculum, partly informed by my (better) understanding of where the students need to be by the time they get to the upper semesters.  I'm also starting to think about the summer term, and a new course I've been asked to lead on Progressive Web Apps (PWA).  This course will be jointly offered with a university in Denmark, and we'll be hosting their students and professor for a few weeks in July/August.  I'll write about it more in the coming weeks, but I'm excited by the challenge of something new.

Telescope

Every time I do the second open source course it looks different.  This time my students wanted to keep going with the Telescope project they started in the fall.  Telescope is a blog aggregation system, built to replace our old blog planet.  I'm happy to let them take this approach because the project uses so many interesting modern technologies and techniques, including:

  • Multi-process node.js web apps
  • Redis
  • Job Queues
  • Docker, Docker Compose, and hopefully Kubernetes
  • GatsbyJS, React, and the requisite Frontend Universe
  • REST APIs and GraphQL
  • Modern web tooling (eslint, Prettier, Jest)
  • SSO with SAML2
  • XML Parsing
  • Continuous Integration, Automated Testing, and Continuous Delivery
  • Security (secrets management, SSL, sanitization of user content)

The list goes on.  It's fun code to work on, and the app is starting to "work."

This week they got the staging box up (thank you Josue, Calvin, and Miguel).  You can try it at http://dev.telescope.cdot.systems/planet if you want.  We don't have the real frontend yet, but I've ported over the old Planet UI to work on the new backend.

Community

One of the areas I want to push the students in relates to community involvement.  A common problem I'm seeing is confusing being good at development with being good at open source development.  If the former is primarily about computers, the latter is primarily about people.

We went from 60 students in the fall to under 10 in the winter, so those that are still active on the code suddenly have a lot more responsibility and code to maintain.  We've also added some new teammates who haven't worked on the code before.  This is an excellent opportunity to highlight pain points in our docs, workflow, and culture.

To that end, there's been quite a bit of auditing recently on our docs and developer experience (thank you Cindy, Julia, Rafi, and Ana), and if you're interested in working on the project with us, begin with the README and CONTRIBUTING guides.  See if you can follow along, and if not, please file an issue.

I've had the students run a weekly Triage Meeting, to go through our backlog and help keep reviews and issues moving forward.  They've been doing a great job burning through old issues, and keeping detailed weekly notes for those not in attendance.

I've also been receiving lots of requests from students not in the course to have me help them with open source and provide mentoring.  I've been using the Telescope project as a way to funnel new Seneca people into open source, since it gives the open source students a chance to work at mentoring, reviewing, and working with people who are remote.  We've had a number of early semester students get their work merged this week, which is wonderful.

But we haven't yet reached the point where everyone feels comfortable to work as a community instead of as a collection of individuals.  This takes time, and requires trust to be built.  I'm hoping that in the next few weeks we'll move from "these are my issues" and "this is my PR that needs review" to "these are our issues" and "these PRs need our review."  Every day there's a bit more activity on Slack, a few more people asking questions, and increased confidence for people to file issues instead of assuming they just did something wrong.  

A Challenge for the 0.6 Release

One of the areas where we struggled during 0.5 was in student blogging.  This is a bit surprising, given that we're building a student blogging aggregator app.  I've long known that blogging is a great thermometer for student work: if you have nothing to write about, it's usually because you didn't do enough.  The inverse is also true, and blogging is easiest when you're busy doing cool work.

To help smooth out some of the problems I've seen this month, I'm going to try an experiment in the coming release (0.6), which I'm codenaming "Grade A Dogfood."

"Grade A Dogfood"

For the 0.6 release (and onward), I'm only going to mark student work that can be accessed via Telescope itself.  If I can't do the following, I'm not going to assign grades for a release.  The requirements are as follows:

  • I have to be able to go to our staging server at http://dev.telescope.cdot.systems/.  If I can go to https://dev.telescope.cdot.systems/ instead, that's a bonus.
  • The site that runs there has to be a GatsbyJS app.  If the data it hosts comes from our GraphQL API, that's a bonus
  • I have to be able to login using our fake SSO service, and it needs to show me that I'm logged in somehow
  • The data hosted in the GatsbyJS app has to be live, and continually updated
  • I have to be able to read everyone's 0.6 blog post describing what they did, and what I need to mark

If our dogfood isn't good enough that I can eat it, I'm not going to mark it.

If the students are good (and I believe they are), they'll use the triage and planning meetings to focus their work, and divide things up among all the team members; they'll also enlist the community for help.  Hopefully I won't go hungry!

by David Humphrey at Sun Jan 26 2020 19:57:11 GMT+0000 (Coordinated Universal Time)

Saturday, January 25, 2020


Ana Garcia

2020/01 Collection: Working on Telescope

Prologue

“Something just flashes into your mind, so exciting, and you must out with it. If you stop to think it over, you spoil it all.”

L.M. Montgomery

I spent my three weeks off deliberating whether to take the class following DPS909. On the one hand, I enjoyed the experience it gave me…on the other, there weren’t a lot of projects I was interested in working on. While localization helped me last semester, it took a while to get pull requests accepted and there weren’t that many projects that I could find that still required Spanish translations.

Additionally, there isn’t really a way to improve the quality of my requests, other than through volume…and I have my limit when it comes to editing.

So I was at a crossroads. I already had the six courses required for my sixth semester. I had already taken a professional option course. I didn't need to take another open source class; I could just move on and do open source on the side.

And then I showed up to the first class.

Half an hour in, I was registering myself in the course. What can I say other than humphd made a very convincing introduction to the course?

My goal for the semester would be to work on the front end of Telescope and use my creative skills so that the UI looks as good as it can.

Fast in every way that matters

“‘How did he happen to do that?’ I asked after a minute.
‘He just saw the opportunity.'”

F. Scott Fitzgerald

When I decided I wanted to work with the front end, I knew I wanted to use GatsbyJS. I had only done translation for it, but I wanted to use it and Telescope was the right opportunity for it.

GatsbyJS is a React-based framework that generates static sites. It utilizes popular web technologies such as Webpack, GraphQL, ES6+ JavaScript, and CSS. This means Gatsby doesn't have too much of a learning curve, as it is made with technology that programmers are already familiar with.

Gatsby also has a big community and plugins that allow us to use already-made solutions rather than create our own from scratch.

Furthermore, as an increasingly popular framework I thought it would be good to implement it and use it. It felt like valuable experience and worthwhile for the project.

How I Learned to Stop Worrying and Created a Gatsby Site

“To learn something new, you need to try new things and not be afraid to be wrong.”

Roy T. Bennett

For the second week, I decided to learn the basics of making a Gatsby site in order to convince the rest of the team that Gatsby would be a good fit. The process was pretty easy, and while CSS is never entirely painless, I felt pretty good after making some quick sites. I go into more detail in my blog post about creating Gatsby sites, which can be read by following the link in the title.

An Unexpected Journey

“The great enemy of communication, we find, is the illusion of it.”

William H. Whyte

The day was Wednesday and I was ready to add Gatsby to Telescope. A simple “Hello World” starter from the Gatsby docs would be more than enough to get the team started with the front-end. I figured a bunch of issues could be filed after the blank slate was merged and I could get the ball rolling.

Having gone through all the issues during a triage meeting, I saw that there were a few contributors eager to help with the front-end and the UI. Issues #511, #512, and #513 made me optimistic about being able to have a good interface by the time release 0.5 came about. After all, the strength of open source is in the numbers. The many can do more than the one.

It…didn't quite work out that way.

When I looked at the Telescope repo that Wednesday morning, I noticed someone else had beaten me to it: a pull request that read “Initial frontend work for GatsbyJS app.”

It was then that I first realized I needed to communicate more about what I was doing. I had realized some time before Wednesday that I needed to keep up with the conversations going on in Telescope, as I had missed important discussions about how we wanted Telescope to look.

But just reading and giving your opinion is not enough. Communicating what you are doing and what you are going to do are also important.

This was only the beginning.

The initial template from the pull request looked like so:

While not the “Hello World” starter I had in mind, it looked like a good base to start with. Besides, I never told anyone how I planned to start the front-end. As far as I'm aware, there are no telepaths working on Telescope.

However, there were some changes requested and it did not pass the tests. This meant we could not merge the pr, which was disappointing as I wanted to work on the front-end ASAP. There was a week left for release 0.5 after all.

I regularly checked the pr hoping to see some changes, ready to review and merge, but nothing really happened until two days later.

It was Friday.

Seeing another comment made me happy. The red X beside the title of the PR? Not so much.

While the PR had been updated, there was no response to any of the changes requested in regard to the failed tests. This was yet another failure in communication, but also one due to our lack of knowledge about ESlint. I'll return to this point later, though.

The updates were changes to the UI, so the interface now looks like this:

The weekend passed without any comment. And I admit, it is frustrating to look back and see how much time had passed, and how little I asked for updates on the UI. Because not only was this putting us behind for release 0.5, but what was I going to do for my release 0.5?

I couldn’t work on code that had yet to be merged. At least, when it came to the UI.

It wasn't until Monday that I talked about the status of the PR and laid out a plan of action. It needed to be a priority: at 5 days, it was the oldest PR of the semester and it was halting the development of the front-end. Additionally, going forward, smaller issues would be filed to better utilize our contributors, and we'd use Adobe XD for prototyping and getting feedback on the designs.

When I checked the pull request later on Monday, more changes had been requested and the status remained red. Luckily, the changes were mostly nits that could be worked on later.

On Tuesday, @humphd filed an issue to “Add static route to serve new GatsbyJS front-end.” I signed on and started working on it that afternoon. I was not sure what I needed to do, and the lack of a Gatsby site meant that I couldn't quite test out what I was learning.

So I decided to make a test site to work on my issue; I had waited long enough. It was then that I finally realized why the other PR kept failing.

ESlint.

Telescope is the only project I've worked on that uses ESlint. All my test sites had been made without it, and I had never looked into what ESlint did or how it worked. What I'm saying is that it had never been an issue.

Until now.

So I filed an issue. I waited about 10 minutes, and then assigned the issue to myself. Might as well do something outside of my comfort zone, right?

You can read more about how I dealt with that in this link, and you can see my pull request in this link.

So after all that time, the ESlint tests were failing the PR because we hadn't added a React plugin.

Finally, today, Friday January 24th, the pr was merged. Telescope has a new front-end!

2020/01/25 02:03:14 EST

The past three weeks flew by.

A blink, and another day passes by. It certainly doesn't help having a computer that loves taking its time to run anything. Neither does scrolling through reddit, if I'm honest.

My contributions weren't quite what I expected. Though I admit I still learned quite a bit, not all of it was technical knowledge. If anything, I now know how much time can be wasted through a lack of communication, or by not being on the same page.

We all have different priorities and different projects we want to work on, so not only is it necessary to speak up, it is necessary to make sure you are understood.

Also. Following up on people is better than just silently anguishing.

On the bright side, I feel fairly comfortable working with Gatsby and I’m not afraid of touching the ESlint file anymore.

However, there is a lot for me to work on for the next release.

It’s fine :’)

by Ana Garcia at Sat Jan 25 2020 08:07:56 GMT+0000 (Coordinated Universal Time)

ESlint: A Brief Guide on eslint-react-plugin

A project I'm working on has ESlint installed. I had never really looked at it and only had a vague idea of what it did. It wasn't until we decided to add React to our project that I was sort of forced to look into ESlint more than I had previously.

If I'm being honest, I feared it a bit. I knew the guy who had initially added ESlint to the project, and he didn't look too happy when he worked on it. I also remember that it broke a couple of pull requests after it was added. So I was wary.

BUT after having done this, I will say my fears were way overblown. It broke nothing and it took me longer to run the tests than it did to make my fix.

What is ESlint?

“ESLint is an open source JavaScript linting utility originally created by Nicholas C. Zakas in June 2013. Code linting is a type of static analysis that is frequently used to find problematic patterns or code that doesn’t adhere to certain style guidelines.”

Eslint Org (https://eslint.org/docs/about/)

ESlint is supposed to allow the developers to create a style to enforce on their JavaScript code. It is useful because the loosely-typed nature of JavaScript allows for errors that arise from humans and computers not being on the same page.

While you might think you coded one thing, the compiler doesn’t always interpret it that way.

Additionally, it is helpful in open source, since it can get kind of chaotic with all the programmers with their own preferred coding styles.

What is Eslint-Plugin-React?

A philosophy of ESlint is that it allows whoever sets it up to add their desired rules and settings. Plugins are one way to do that.

So Eslint-Plugin-React is a plugin that adds React-specific rules and support, such as recognizing .jsx syntax. Without it, you get errors when you run the ESlint tests.

For example, in the project I had been working on, “Error: Unexpected token < found” kept popping up whenever I wanted to write a class. Not a fun message to find when a deadline is approaching. For a moment I thought I just didn’t know how to write a React class.

Installing the Plugin

I will assume you already have ESlint installed.

If you have ESlint installed locally, as in only for one project, run the following command:

npm install eslint-plugin-react --save-dev

However, if you have ESlint installed globally, the plugin must be installed globally too (the --save-dev flag isn't needed in that case):

npm install -g eslint-plugin-react

Now, you need to make some changes to the .eslintrc file. Remember that your project might have different needs; to better see what your project needs, go to the plugin repo. In Telescope, we needed to make the following changes:

This section did not previously exist
This section was added to the rules section
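
Since the exact diff depends on the project, here is only a rough sketch of the kind of additions the eslint-plugin-react README describes (your .eslintrc may be JSON rather than JS, and the rules you want will differ):

module.exports = {
  // ...whatever the project already extends stays here...
  extends: ['plugin:react/recommended'],
  plugins: ['react'],
  parserOptions: {
    ecmaFeatures: {
      jsx: true, // teach the parser about JSX so '<' stops being an "unexpected token"
    },
  },
  settings: {
    react: {
      version: 'detect', // pick up the React version from package.json
    },
  },
  rules: {
    // React-specific rules can be added or tuned here, e.g.:
    // 'react/prop-types': 'off',
  },
};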

by Ana Garcia at Sat Jan 25 2020 08:06:56 GMT+0000 (Coordinated Universal Time)

How I Learned to Stop Worrying and Created a Gatsby Site

When I first thought about using Gatsby, I did what I always do when doing something new: I overthought it.

I spent too much time reading about what Gatsby was and what Gatsby could do, and where you could use Gatsby…instead of making a site. Sometimes the best way to learn something is to do it. Theory can only get you so far, and you can only show off so much with theory.

For about a month I did nothing, and then, when I volunteered to work on the front end of Telescope, I had no experience. I had to fix that quickly so I could convince everyone else working on Telescope that we should use Gatsby.

How to start a Gatsby Site

The first step in creating a Gatsby site, is to install the Gatsby CLI. According to the Gatsby docs site, the CLI is “the main entry point for getting up and running … a Gatsby application.” It allows you to run all the commands you need.

To install it globally you need to run:

 npm install -g gatsby-cli

If you want to see the commands you can use, you can either check the Gatsby docs site or, using the CLI, run:

gatsby --help

Installing it on my computer didn't take long, unlike the other process, so that was nice.

After installing the CLI, you have some options. If you use:

 gatsby new name-of-project

then the site generated is the default Gatsby theme. It looks like this:

You can find the repo for the default site at gatsbyjs / gatsby-starter-default

Gatsby also offers other official starters, which you can use by using the new command and the url of the git repo. For example:

gatsby new your-site https://github.com/gatsbyjs/gatsby-starter-blog

The starter site templates offered by Gatsby are:

gatsby-starter-blog

gatsby-starter-hello-world

Perfect for a blank slate

gatsby-starter-blog-theme and gatsby-starter-theme-workspace are also available themes, but they do not have a demo available.

You can use other templates that are not starters, or you can make your own; installing them follows the same principle as above, using the new command and the URL of the repo.

For my practice site, I decided to use the default starter because my React knowledge was rusty. It took a good day, but I got the hang of it and was able to rearrange a site the way I wanted.

While most of the process was similar to building a React app, I found the way to generate a dynamic navigation bar interesting.

In order to do that, you need to go into the gatsby-config.js file, and add menuLinks to the site metadata.

module.exports = {
  siteMetadata: {
    title: 'Gatsby Default Starter',
+    menuLinks:[
+      {
+         name:'home',
+         link:'/'
+      },
+      {
+         name:'page2',
+         link:'/page-2'
+      }
+    ]
  },
  plugins: []
}

By creating the navigation link this way, you can use GraphiQL to make requests, like so:

The query will return JSON objects with the links (https://www.gatsbyjs.org/docs/creating-dynamic-navigation/)
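
Inside a component, the same data can be pulled with Gatsby's useStaticQuery hook and rendered with Link. This is only a sketch, not the exact code from my practice site:

import React from 'react';
import { Link, useStaticQuery, graphql } from 'gatsby';

// A navbar that consumes the menuLinks defined in gatsby-config.js.
const Navbar = () => {
  const data = useStaticQuery(graphql`
    query {
      site {
        siteMetadata {
          menuLinks {
            name
            link
          }
        }
      }
    }
  `);

  return (
    <nav>
      {data.site.siteMetadata.menuLinks.map(({ name, link }) => (
        <Link key={name} to={link}>
          {name}
        </Link>
      ))}
    </nav>
  );
};

export default Navbar;
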
File Structure

But I'm getting ahead of myself. After you run gatsby new, you should be aware of the project structure.

A possible file structure, (https://www.gatsbyjs.org/docs/gatsby-project-structure/)

While you can find a more thorough list of the files that are generated for a Gatsby site in the docs, I will focus on the ones I spent most of my time on.

/public – A folder generated when the command gatsby build is run; inside it you will find the files you need to host your site.

/src – Where you will write the code for your front end. It's where you will find folders for your pages or your components, among other things you will see on your site.

/src/pages – Components in this folder become pages with automatically created paths based on the file name. For example, a file named labs.js will be served at the path “/labs”. If you want something like “labs/lab-1”, then you would create a folder in /src/pages named labs, and inside it you would create a file named “lab-1.js”. A minimal page component is sketched below.
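
As a quick sketch (the file name here is just an example), such a page component is an ordinary React component exported as the default:

// src/pages/labs.js -> served at /labs
import React from 'react';

const LabsPage = () => (
  <main>
    <h1>Labs</h1>
    <p>Everything I worked on in the labs.</p>
  </main>
);

export default LabsPage;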

gatsby-config – Meta information about the site can be found here. This is where we put the menuLinks for a dynamic navbar, for example. This file can also include information about the plugins used, or where you would specify a path prefix. For more information, the Gatsby docs are a great source.

Seeing What You Have Made

If you want to see how the site looks while you are making it, you need to run gatsby develop. On my machine, the first time I run develop after opening my project, it takes a really long time (about 5 minutes or so). While the waiting time does go down on subsequent runs, it is still a bit of a wait.

The upside is, you don't need to re-run gatsby develop every time you update anything on your site. After saving the changes, the process reloads and you can see the changes at localhost:8000.

Once you have completed your site, you can run gatsby build to generate the public folder files that you need to make your website go live. If you want to run the website based on the public folder files, you need to use gatsby serve.

Aftermath

After my first attempt at a blog with Gatsby, I also made a portfolio site for another class, which also took me about a day.

My experience has been fairly positive and painless, which made me feel like Gatsby would be great to use with Telescope, as it is fast for creating a static site and there are a lot of templates that can be used. This allows Telescope to look good without having to spend too much time on insignificant details.

by Ana Garcia at Sat Jan 25 2020 01:21:44 GMT+0000 (Coordinated Universal Time)


Josue Quilon Barrios

Back to Telescope

It's live!

Yes, Telescope is live, only its development version though, but it is live, and you can check it out here. If you have no clue about what Telescope is, go check my previous posts to learn about it. Go, I'll wait.

Ready? So here we are again, trying to add more features to Telescope, learning new things, and getting excited again!

It's becoming a very intense project, so I'll go right away to the things I've been involved with:

- GraphQL

 GraphQL is a query and manipulation language for APIs and a runtime for fulfilling queries with existing data. Adding it to Telescope wasn't easy, but it's done, and now we can take advantage of its features.


- Hashing and encoding IDs

This one was a bit trickier. It was my first time doing this so I had to get a bit familiar with crypto, a wrapper for OpenSSL cryptographic functions, but once I understood what had to be done (and with some extra help from others involved in the project), I managed to integrate it with what we had.
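
As a rough sketch of the idea (not the exact code we landed on), Node's crypto module can turn something like a post's URL into a short, stable ID:

const crypto = require('crypto');

// Hash a value (e.g. a post's URL) into a short, stable id.
// The 10-character length is just for illustration.
function hashId(value) {
  return crypto
    .createHash('sha256')
    .update(value)
    .digest('hex')
    .slice(0, 10);
}

console.log(hashId('https://example.com/blog/some-post'));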


- Minikube and Kubernetes

Minikube and Kubernetes additions to Telescope are still a WIP, but we've made some progress with them. I'm working with another contributor to run Telescope using a Kubernetes cluster, but due to our lack of experience, we're trying first with minikube. I think we're getting there; we did some testing and we hit some walls that you're supposed to hit when you're learning this stuff. If everything goes well, and our guesses are correct, I think we'll be able to get it running in a week or two.


- Deployment

This is another very interesting piece. Right now Telescope is running on port 80, but it'd be nice (and professional) to use SSL, right? Well, that's what I'm going to be doing soon. I just started researching Nginx to see how it works and how it can be used with docker-compose (which is what we're using for our staging server), and once Nginx is added to our docker-compose file, we'll try to convince Let's Encrypt that we're trustworthy.

That's pretty much what's happening right now. Telescope keeps growing at a pace that's almost hard to keep up with, but what we're getting out of it in terms of experience and knowledge is simply awesome.

Oh, and we're adding the fanciest toy in the store right now, Gatsbyjs.

Stay tuned!

by Josue Quilon Barrios at Sat Jan 25 2020 04:36:00 GMT+0000 (Coordinated Universal Time)

Friday, January 24, 2020


James Inkster

Half-Full or Half-Empty?

We are currently rolling out a pretty cool project called Telescope, and we are halfway through our adventure to get it to a version 1.0 product! Currently we are at release 0.5, and halfway to what will hopefully be a fully functional product. I've mostly focused on getting the single sign-on working and fixing any bugs along the way. However, I was also trying to figure out how to test a single sign-on service!

Single-Sign on Testing

These are the different articles I approached to learn how to test single sign-on. On top of surgery last week, taking all of this in was quite heavy, partly because a lot of these articles don't show good examples of when to use the possible solutions they provide.

I think what frustrated me about this was the conclusion I came to: I could not add code, and any testing of the single sign-on was already built into our Docker image. This is something I would never want to do in the workforce, as I feel like it's not a good showcase of skills or effort, and it's quite a bit of time spent without progress for the project itself.

Bugs, Bugs, Bugs

The other issue I ran into actually came from some code I wrote a while ago, in a rush to get it finished, when I was only able to test it on my local computer. That PR got merged anyway, so fixing it was one of the first things I worked on. It also let me understand the routing of our current project…and how much a proper front end will help fix things.

My pull request was here.

It’s mostly just fixing things, but I learned quite a bit about the other aspects, and I made sure to test it out on a different computer this time around to make sure others could also utilize the SSO utility.

Linux on Windows?

We had to create certificates so you can log in to our SSO, and that was a problem for our Travis CI build. This could be an ongoing issue, and I will have to teach people how to view Travis CI so they can figure out for themselves if a build fails.

I think a lot of people currently don't even read the tests, or have no idea what is going on in them. So I do think this is an opportunity for people to improve their understanding of what is going on.

My pull request for this is located here.

Final Thoughts

Overall, I am not completely satisfied with my 0.5. Although I learned a lot, I like creating more than learning, if that makes sense, and I did very little creating and have few real-world examples to show for it. And in keeping with my favorite fandoms,

It’s not who you are underneath, it’s what you do that defines you.

~ Batman/Bruce Wayne

It’s great that I learned, I’m glad I did, but that’s good for me, and not the community around me.

by James Inkster at Fri Jan 24 2020 21:51:41 GMT+0000 (Coordinated Universal Time)

Thursday, January 23, 2020


Krystyna Lopez

Release 0.5 React Experimental continuation

In this post I will continue to describe what I have done so far to complete my issue. This issue tries to tackle the problem of integrating the React Experimental channel into the Gutenberg project.
The steps I have taken so far:
1. First, I read through all the documentation about the Experimental channel and the ways to implement it. The Experimental channel should only be used in projects that consume React as an external library: if a project uses React for its GUI and an experimental feature has bugs or fails, it can botch the front end. Gutenberg uses React as an external library, so the team wants to see how the Experimental channel behaves on Gutenberg while also testing new features.
2. Currently, Gutenberg uses React 16.9.0. After reading through all the documentation, I updated the React version used by Gutenberg. Because Gutenberg loads React as an external library, some changes also had to be made in the PHP.
So here are my files with the changes:





After these changes I created a pull request, but unfortunately it did not pass the Travis tests.
3. In my third step I'm trying to solve the problem that is causing my code to fail on Travis. After investigating the problem, I was able to narrow the issue down. My code is failing on
"test-unit": "wp-scripts test-unit-js --config test/unit/jest.config.js".
Once the Experimental channel is integrated into Gutenberg, new features from React will be unlocked for testing. As of today, the React Experimental channel has released Concurrent Mode, a feature that allows an app to stay responsive and gracefully adjust to the user's device capabilities and network speed (see the sketch below).
So far this is my last step, because without fixing the code I cannot implement any other changes.
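
As a minimal sketch of that opt-in (based on the React Concurrent Mode docs at the time, not on Gutenberg code; App here is a hypothetical root component):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // hypothetical root component

// On an experimental build, createRoot opts the whole tree into Concurrent Mode,
// replacing the classic ReactDOM.render(<App />, rootElement).
const rootElement = document.getElementById('root');
ReactDOM.createRoot(rootElement).render(<App />);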

by Krystyna Lopez at Thu Jan 23 2020 16:14:22 GMT+0000 (Coordinated Universal Time)


Rafi Ungar

Telescope: connecting the (c)dots 🔭

Over the last few weeks, I worked on a myriad of issues to address my goal of becoming more familiar with multiple areas of Telescope:

https://github.com/Seneca-CDOT/telescope/projects/6
What I’ve been working on
I examined Calvin’s issue and judged no further tests were necessary.
https://github.com/Seneca-CDOT/telescope/pull/553
I have been adding several logging statements to Telescope; here’s what they look like…
https://github.com/Seneca-CDOT/telescope/pull/550
… And here’s how they’re implemented.
My work on the OPML feed packager is in progress; modules have been ‘plugged in’. PR soon!
https://www.moves-tangier.com/
I met with Sofie, the project lead of MOVES, an organization for volunteers working in Morocco, about open-sourcing the development of their to-be web app. Exciting times ahead!
https://layer5.io/blog
My blog posts are poppin’ up on Layer5’s website!

by Rafi Ungar at Thu Jan 23 2020 06:36:45 GMT+0000 (Coordinated Universal Time)


Cindy Le

OSD 700 Release 0.5

For my 0.5 Release, I chose to audit and improve our documentation. The main area of focus was the documentation for setting up the environment for new contributors.

My first PR was about our project board since we will be using project boards for each release. I added definitions for columns and definitions for labels since they hadn't been discussed anywhere.

My second PR was the bulk of my work. I audited our setup documentation for Linux and Mac to see if I could actually get Telescope running using just the instructions we have now. I wanted to cover Windows too but didn't have the time, so I'll be finishing that up for my 0.6 Release.

I originally had Telescope running on my Fedora virtual machine and had installed Redis natively. That’s what I’ve been using since last semester. To really audit our setup instructions, I created a new virtual machine running Ubuntu and had Redis running through Docker. Since this was a completely new installation of an operating system, I’ll explain all the steps I went through (I did the same thing for Mac):

  • Download Google Chrome to use as default browser (I prefer this over Firefox because I like Chrome’s developer tools better)
  • Download Visual Studio Code (this is my go-to code editor)
  • git
  • node
  • Docker for Linux (there were different methods to install Docker but I took the long way of downloading the following three individually):
    • Docker CE
    • Docker CLI
    • containerd.io
  • Docker Desktop for Mac

I had never used Docker until I audited the docs and wow, Docker really made it a lot easier to get Redis working, especially on Mac.
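For anyone following the same path, getting a local Redis up through Docker is essentially one command; a minimal sketch (the container name and port mapping are just the defaults I'd use, not necessarily what our docs say):

docker run --name redis -p 6379:6379 -d redis

After that, the app can talk to Redis on localhost:6379 as if it were installed natively.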

There are still a number of issues related to my Environment Setup PR that I need to follow up on:

  • This issue addresses the steps that need to be explained better and the unclear instructions about the .env. I didn't want to change anything about this because there are currently a few other issues that will be changing the example.env file, and I want them to be resolved before I make any changes.
  • And the obvious one: I still need to audit the Windows setup instructions (I already have a box of tissues prepared for this)

For 0.6, I've assigned myself Automatic PR deployments for the frontend, which I'm excited to do because it will be extremely useful. Sometimes I feel like I shouldn't get so excited about stuff because, idk, maybe I'm cursed or something; whatever I'm excited for gets ruined somehow. Like my React Native app and now the frontend of Telescope… Well, I shouldn't say it's ruined, it's too early to say that, but I would've preferred a more thought-out and planned approach. I don't even know what to do with the current PR that's up right now for the initial front end; it didn't pass our Travis and CircleCI tests when it was first up, and it has fallen so far behind that it actually has more errors than it did initially. Not to mention it has so many bugs and doesn't even look that nice… (it looks like the old site but monochrome blue). If it were up to me, I would've scrapped it. I really wanted to see what the Frontend Lead had in store because I snooped around on LinkedIn and saw that she's a Graphic Designer. I admit I had high expectations after seeing that, and to see that her ideas were not brought to light was a disappointment to me. Anyway, that's just my opinion; at the end of the day, I still want to see some kind of frontend so I can do my automatic deployment thing.

by Cindy Le at Thu Jan 23 2020 03:20:09 GMT+0000 (Coordinated Universal Time)

Tuesday, January 21, 2020


Timofei @fosteman Shchepkin

A tell about my Motivation

While reviewing my knowledge base for upcoming interviews this winter, I gained a new view of one of my accomplishments that I never felt was particularly important. It appears I was blind to a source of the motivation that drives me each and every day.

This accomplishment of mine, I must admit, may sound rather minuscule and uninspiring. It turns out, contrary to that common thinking, it's one of those inspiring things that direct our lives.

It began on March the twelfth, 2016, when I turned 16 and received a gift. It was a RepRap 3D printer, which jolted my imagination and fuelled ambitions above and beyond printing free models.

I think I am proud of myself, and of the congruence I leaned on when graduation from high school marked my conscious choice of profession. It's that congruence that now runs through my life, endowing it with joy and purpose.

Listen to yourself in moments of choice; the right answer is often the most obvious one.

by Timofei @fosteman Shchepkin at Tue Jan 21 2020 21:09:30 GMT+0000 (Coordinated Universal Time)

Monday, January 20, 2020


Calvin Ho

Async Await and Promises

As a continuation of my PR for Telescope, I thought I should talk a bit about async/await and the old way of using return new Promise(). Here are a few examples of do's and don'ts:
// Async functions return promises, no need to add await
// DON'T DO
async function returnsPromise() {
  return await promiseFunction();
}

// DO
async function returnsPromiseFixed() {
  return promiseFunction();
}

//---------------------------------------------------------------------------

// Don't use await when the function is not async
// DON'T DO
function noAsync() {
  let promise = await promiseFunction();
}

// DO
async function noAsyncFixed() {
  let promise = await promiseFunction();
}
//---------------------------------------------------------------------------
// Writing errors
async function f() {
  await Promise.reject(new Error("Error"));
}

// SAME AS
async function f() {
  throw new Error("Error");
}
//---------------------------------------------------------------------------
// Use try catch to wrap only code that can throw
// DON'T DO
async function tryCatch() {
  try {
    const fetchResult = await fetch();
    const data = await fetchResult.json();
    const t = blah();
  } catch (error) {
    logger.log(error);
    throw new Error(error);
  }
}

// DO
async function tryCatchFixed() {
  try {
    const fetchResult = await fetch();
    const data = await fetchResult.json();
  } catch (error) {
    logger.log(error);
    throw new Error(error);
  }
}
const t = blah();
//---------------------------------------------------------------------------
// Use async/await. Don't use Promises
// DON'T DO
async function usePromise() {
  new Promise(function(res, rej) {
    if (isValidString) {
      res(analysis);
    } else {
      res(textInfo);
    }
    if (isValidString === undefined) {
      rej(textInfo);
    }
  });
}

// DO
async function usePromiseFixed() {
  const asyResult = await asyFunc();
}
//---------------------------------------------------------------------------
// Don't use async when it is not needed... Don't be overzealous with async/await
// For example the sentiment module we're using is not an async function
// DON'T DO
module.exports.run = async function(text) {
  const sentiment = new Sentiment();
  return Promise.resolve(sentiment.analyze(text));
};

// DO
module.exports.run = function(text) {
  const sentiment = new Sentiment();
  return sentiment.analyze(text);
};
//---------------------------------------------------------------------------
// Avoid making things too sequential
// DON'T DO
async function logInOrder(urls) {
  for (const url of urls) {
    const response = await fetch(url);
    console.log(await response.text());
  }
}

// DO
async function logInOrder(urls) {
  // fetch all the URLs in parallel
  const textPromises = urls.map(async url => {
    const response = await fetch(url);
    return response.text();
  });

  // log them in sequence
  for (const textPromise of textPromises) {
    console.log(await textPromise);
  }
}
//---------------------------------------------------------------------------
// Examples
// Refactor the following function:

function loadJson(url) {
  return fetch(url)
    .then(response => {
      if (response.status == 200) {
        return response.json();
      } else {
        throw new Error(response.status);
      }
    });
}

// Solution:
async function loadJson(url) {
  let fetchResult = await fetch(url);
  if (fetchResult.status == 200) {
    let json = await fetchResult.json();
    return json;
  }

  throw new Error(fetchResult.status);
}

// refactor to use try/catch
function demoGithubUser() {
  let name = prompt("Enter a name?", "iliakan");

  return loadJson(`https://api.github.com/users/${name}`)
    .then(user => {
      alert(`Full name: ${user.name}.`);
      return user;
    })
    .catch(err => {
      if (err instanceof HttpError && err.response.status == 404) {
        alert("No such user, please reenter.");
        return demoGithubUser();
      } else {
        throw err;
      }
    });
}

demoGithubUser();

// Solution:
async function demoGithubUser() {
  let user;
  while (true) {
    let name = prompt("Enter a name?", "iliakan");
    try {
      user = await loadJson(`https://api.github.com/users/${name}`);
      break;
    } catch (err) {
      if (err instanceof HttpError && err.response.status == 404) {
        alert("No such user, please reenter.");
      } else {
        throw err;
      }
    }
  }

  alert(`Full name: ${user.name}.`);
  return user;
}

// Call async from non-async
async function wait() {
  await new Promise(resolve => setTimeout(resolve, 1000));

  return 10;
}

function f() {
  // ...what to write here?
  // we need to call async wait() and wait to get 10
  // remember, we can't use "await"
}

// Solution:
function f() {
  wait().then(result => alert(result));
}

by Calvin Ho at Mon Jan 20 2020 05:15:37 GMT+0000 (Coordinated Universal Time)

Friday, January 17, 2020


Cindy Le

Hello again…

This is gonna be a long one because I didn’t have time to blog last week. Today marks the end of Week 2 of Winter 2020 so I wanna blog where I’m at with each of my courses.

PMC 115 – IT Project Mgmt Fundamentals

This is my third attempt at getting into this course and I finally succeeded. I’ve tried previous semesters and was blocked because they kept telling me that PMC 115 is only available for the BSD and the Project Management students which didn’t make sense to me since it’s constantly offered as a CPA professional option. This is probably the least interesting course I have this semester because it’s literally a Microsoft Project tutorial class with heavy lectures on Project Management concepts.

GAM 536 – Game Content Creation

I didn’t intend to take this course because I don’t see myself becoming a game developer but I’m actually pretty happy I took this class. We’re not dealing with any coding languages, instead we’re making an amusement park in Adobe MAX 2020 and Photoshop. It feels like an art class for programmers. Last week, we learned some basic concepts and we made a bouncy castle and an airplane. This week, we learned about triangle reduction which is removing faces on objects we don’t see to reduce the resources being used to render those objects.

PRJ 666 – Project Implementation

Where do I even begin with this one?… This really goes all the way back to last semester in PRJ 566, where project ideas were proposed, students were put into groups, and in those groups we would plan how we would implement and build the project in the following semester. I'm not sure how much I'm allowed to reveal about this specific project, but when it was proposed, it was supposed to be a web app or at least use web-based technologies (React). I signed up because I liked the idea and I was comfortable using web technologies. I really thought I got super lucky because not only was the idea great but my group members were awesome as well. This has probably been my favourite group to work with in all my years in college. We breezed through the course and, near the end, my group leader informed us that she would be going on co-op for 8 months and would not be taking PRJ 666 with us next semester. We discussed what technologies we would use to build the application when we came back, and someone suggested we use Java. I shot that down and expressed that I was extremely uncomfortable with programming in Java and OOP in general. I had to retake OOP 244 and OOP 345, and when I took JAC 444, I felt like I didn't learn anything; I submitted all my labs but they were done poorly. If I were to grade myself in that class, I would've failed myself. Since this application was meant to be used on mobile and needed to utilize the camera and geolocation features of phones, we agreed that we shouldn't build a web app, since web apps tend to have trouble accessing native mobile features on phones. In conclusion, it was no to a web app and no to Java, which left us with a hybrid app; one group member was googling on the spot and read to us about hybrid apps. We agreed to build a hybrid app using React Native. I researched more about React Native when I got home, and a "hybrid app" is not what we're getting when we use React Native: React Native code is compiled into native code. In short, we would have one code base and it's able to produce an Android app and an iOS app. I was pretty excited to learn how useful React Native is; you don't have to get two different programmers to program for Android and iOS separately, you can just get one React Native programmer and you'd get two apps.

Anyways, I spent 2-3 weeks in between the semesters learning React Native, thinking that's what we were gonna use to build the app. When we came back, we picked up a new group member to replace the one we lost and we allocated tasks equally among 4 members. The following week, the new guy tells us he's switching to BSD and won't be in our group anymore. That wasn't even the worst news. Yesterday in our group meeting, the guy that suggested we use Java last semester asked "hey, so have we decided what we're gonna be using to build our app? We should use Java". At that point, I didn't even know why that was a question; I've told the group multiple times that I hate Java, and throughout my holiday break I posted my React Native learning progress in our Discord. Literally at the end of the first week back, I had completed the two front-end screens I was assigned using React Native and posted it on my timesheet for Week 1. The other two didn't post anything for their Week 1 timesheets: the Java guy was assigned fetching our API, because the data was down on the government website, so he hasn't started coding anything, and the other guy is indifferent because he doesn't know anything about React Native or mobile development using Java and will have to learn something new anyway. I'm pretty frustrated at this point because I don't think our Java guy will do any work in React Native, and I can't have him block the project from moving forward. We only have 9 weeks left to deliver this application and no one besides me has started coding yet.

I tried my best to understand it from his point of view and really take a logical approach to this. We're all in the same program, so we would've all completed a whole introduction-to-Java course and about two weeks of React in one of the web development courses. I liked React so I chose to pursue it and dive deeper; the other two most likely didn't. The Java guy is taking Mobile Android Development this semester, so he's not an expert in this at all. In the end, either I'm extremely uncomfortable or he's extremely uncomfortable with whatever language we choose to build our app in. I chose to bite the bullet here, so we're gonna build the app using Java. I told them to push back whatever was assigned to me by a week and a half so I can learn Java and Android development and implement my tasks properly. This was definitely not what I imagined would happen… ever… 10 days is not enough to learn Java, but we've already wasted so much time and we need to deliver a working app in 9 weeks. Optimistically speaking, maybe by the end of this all, I'll be so proud of our app that I'd put it on my resume and be like "See this? I know how to teach myself new skills. You should hire me."

OSD 700 – Open Source Project

I chose to do documentation here; I might touch upon the front end, but I think there are a lot of people doing that already. I wanted to do documentation because I'm still lost and I'm trying to wrap my head around the project again. There are areas I want to familiarize myself with to really help newcomers join and contribute to our project. I finally got around to learning what Docker is, and it makes me want to get our project working on Windows, Mac and Linux. This will take me back all the way to the beginning, where I would get the environment set up for each operating system as if I were actually new. I think this is the best approach and it makes sense to me. I previously had our project running locally on a Linux VM on my Windows laptop, and I remember running into a lot of trouble getting Redis to cooperate (this was before we were using Docker). Now that we are using Docker, the initial setup shouldn't make me wanna throw my laptop and desk out the window. But yeah, I still have a lot to learn: more Docker, Kubernetes, Redis, Gatsby, React… I also wanna see what's up with Travis and CircleCI. I know I won't be an expert at the end of this, but I'll at least be familiar with all the tools we're using and what they do for our project.

by Cindy Le at Fri Jan 17 2020 18:18:57 GMT+0000 (Coordinated Universal Time)

Tuesday, January 14, 2020


Timofei @fosteman Shchepkin

Idiomatic Swift

Idiomatic Swift

As a wretched perfectionist, I am tantalized to seek the correct way to do certain things. The standard library provides some clues, sure, but even that has changed over time in C++, and likewise in Swift (since the 2.0 beta; yes, I am old enough to remember the conventions that were dropped and the ones that were adopted). It has been nearly 5 years since the introduction of the language, and now I am all in.
Congratulate me, I require social approval to keep on going.

As a person coming from different languages:
Swift resembles everything likeable about C++, including low-level twiddling,
yet it has rooted out undefined behaviours (think null pointer exceptions).
The lightweight trailing closure syntax of map or filter is similar to that of Ruby.
Generics are similar to C++ templates, with additional type constraints that ensure a generic function's integrity at definition time rather than at runtime.
Flexible higher-order functions and operator overloading mean I can write code that's similar to JavaScript!
And the @objc and dynamic keywords allow me to use selectors and runtime dynamism in ways I could in Objective-C. Wonderful.

Given all that familiarity, I thought I could adopt all the same mental models from the knowledge I have. Well, I am indeed capable of doing so, for conversion of Objective-C into Swift is one click away. Case in point: familiar object-oriented design patterns apply.

I started coding… and hit a wall. Several times. I can't use protocol extensions with associated types like interfaces in Java (arrays are not covariant). There is no functor here either. Though, sure enough, there are ways of doing everything in a different manner.

Swift is a programming language unlike any other, and this is promising, for it brings together the best practices, or so it says. (At least I need not call C libraries to write a collection type.)

Thus, I am into learning this language.

I want to write succinct, elegant code and get things done in Apple application development.

This series of articles is about my journey.

I promise to answer as many "Why does Swift behave like that?" questions as I can muster.

Each article will cover fundamental concepts like optionals and strings one by one.

by Timofei @fosteman Shchepkin at Tue Jan 14 2020 19:04:00 GMT+0000 (Coordinated Universal Time)

Monday, January 13, 2020


James Inkster

Of Cors it is.

Another year, another blog. This time it's going to be reflective of the community at large, and of something I noticed that never pops up on my feed, nor in my searches.
What could this mysterious thing be?

Where’s the tutorial!

As someone learning programming, I constantly refer to Google to see what other developers have made, take bits from their code, do the tutorials, and then see what I can piece together. Each tutorial takes about 15-20 minutes on average, I would say; some take longer, some take less, and they are usually quite good at helping you build whatever you need to.

This isn't my issue; the tutorials are great. What I'm more surprised at is how much time I spend looking for things without ever being told "you can't do it that way."

What do I mean?

I spent hours trying to find my way around CORS for an Angular application. I did not want to set up a back end for something where I just wanted to display information being generated by an RSS feed. (Hint: it's this feed.)

I thought for sure there would be a relatively simple way to do this without one; alas, in the end it took me only about 10 minutes to set up an Express server and create a route that would provide the feed I needed.
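Something along these lines is all it took. This is a sketch rather than my exact code; the route name and feed URL are placeholders, and cors and node-fetch are just the packages I'd reach for:

const express = require('express');
const cors = require('cors');
const fetch = require('node-fetch');

const FEED_URL = 'https://example.com/feed.rss'; // placeholder: whatever feed you actually need

const app = express();
app.use(cors()); // let the Angular app call this server from another origin

// Proxy the feed so the browser never has to fight CORS itself
app.get('/feed', async (req, res) => {
  try {
    const response = await fetch(FEED_URL);
    const body = await response.text();
    res.type('application/xml').send(body);
  } catch (err) {
    res.status(500).send('Could not fetch the feed');
  }
});

app.listen(3000);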

Consultation

I consulted many people during this time, until one gave me fairly straightforward advice, which is why I went with Express. The problem is: why isn't this question easily searchable? Dealing with CORS is a very common, recurring problem in the programming world, and there are solutions for how to fix it; I've seen many other new developers run into this same issue. As a community we do an amazing job showing everyone what we can do, but very little showing what we can't do. Lists like that would be super helpful for the things people want to overcome or accomplish as well. Imagine if you could see the real issues people struggle with on a day-to-day basis, the ones you know you and your friends all dealt with, and were kind of like "Why? Why is this an issue?"

It would bring good awareness. The open source community at large is very helpful; simply put, these kinds of questions don't get answered on Stack Overflow or GitHub, but usually by an individual who went through the same growing pains.

Am I just complaining?

I don't think so, but I could definitely understand if someone reading this felt that way. We live in a time where I can get more help than ever at the tip of my fingertips; my growing pains are usually about finding out what I should be reading, not where I should be reading it. (Very different 30 years ago.) So I do want to be clear: I think the community is very healthy and happy, and this is just something I think needs the smallest bit of contribution from the community. If it isn't made within 10 years, I will probably make it myself once I'm more experienced, although I'm sure there are tons of individuals with a wealth of knowledge that exceeds mine.

by James Inkster at Mon Jan 13 2020 15:09:25 GMT+0000 (Coordinated Universal Time)

Sunday, January 12, 2020


Krystyna Lopez

Release 0.5. Integrating React Next into Gutenberg Project

This is the first part of release 0.5, and in this post I will talk about the steps I will take in order to solve issue #18216 in the Gutenberg project.

What the issue is about:
Find a way to integrate React Prerelease channels into WordPress/Gutenberg. React Prerelease channels allow projects that are not using React in a user-facing application to test features that might be available in the next releases. By integrating a React Prerelease channel into the project, the community will help improve React while testing out new features.

What needs to be done in Gutenberg:

1. Install the Next or Experimental channel inside the project (as I'm waiting for a response about which channel I have to integrate, Next or Experimental, I will talk for now only about the Next channel).
In package.json 
change version of react and react-dom to  "next"
so package.json will look this way:
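Roughly like this (a sketch showing only the relevant dependency lines; the rest of the file stays as it is):

"dependencies": {
  "react": "next",
  "react-dom": "next"
}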

2. Set up CircleCI or Travis CI in order to run tests periodically.

- Set up cron jobs, which are supported by CircleCI and Travis CI
- In the cron job, update React to "next"



- Run tests against updated packages.

Note: the `yarn upgrade` command is used here, while for the npm CLI the command is `npm update react@next react-dom@next`.
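For CircleCI, this could be done with a scheduled workflow; a sketch (the job name here is illustrative, and the job itself would run the update command above followed by the tests):

workflows:
  version: 2
  react-next-nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - update-react-and-test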

3. Gutenberg uses React as an external library, and according to the documentation I have to tweak the PHP code and the webpack configuration to enqueue it using the WordPress API.

by Krystyna Lopez at Sun Jan 12 2020 06:03:11 GMT+0000 (Coordinated Universal Time)

Saturday, January 11, 2020


Calvin Ho

Telescope Issue-525



The story of my PR for issue-525.
I can't believe I'm making gifs at 4 in the morning.

by Calvin Ho at Sat Jan 11 2020 09:02:20 GMT+0000 (Coordinated Universal Time)

Thursday, January 9, 2020


Calvin Ho

Kubernetes Pt3


*Blessed*



Thank you @manekenpix, I still have no idea how to fix all the problems we came across, but let us just enjoy this for now.

by Calvin Ho at Thu Jan 09 2020 07:32:36 GMT+0000 (Coordinated Universal Time)

Kubernetes

Containers have become all the rage nowadays, and I have zero experience with either Docker or Kubernetes. This post serves to explain some Kubernetes concepts to myself.

Pods - can be made up of one or more containers. Pods can also be replicated horizontally to allow scaling of an app.

Deployments are used to manage pods. To deploy a pod, we use the following line:
kubectl create deployment (appName) --image=(imageName)

kubectl get deployments - will display all current deployments

kubectl get pods - will display all pods 

kubectl get events - will display all the things that have happened, such as new pods

Although we have created a deployment for our pod, it is only accessible within the Kubernetes cluster. A Service enables access to the deployed app; to create a Service we have to use the following command:

kubectl expose deployment (appName) --name=(serviceName) --type=LoadBalancer --port=(portNumber)

*if the --name=(serviceName) flag is not provided, the service will default to the appName
*--type= can be any of the below:
LoadBalancer - if the cloud provider Kubernetes is running on provides load balancing.
ClusterIP - can only reach the service only from within the cluster
NodePort - creates a ClusterIP and NodePort service will route to it. Allows access from outside the cluster by using NodeIP:NodePort
ExternalName - maps the service to the contents of the externalName field

We can verify the Service has been created by using the following command:
kubectl get services - this will display all the exposed Services

minikube service (serviceName) - will launch the service in a browser.


Technically, the steps we need to follow to deploy an app on Kubernetes are:
1. Create a deployment to manage the pod (kubectl create deployment (appName) --image=(imageName))
2. Expose the deployment (kubectl expose deployment (appName) --name=(serviceName) --type=LoadBalancer --port=(portNumber))
3. Run the service (minikube service (serviceName))

To replicate the pods, we use the following command:
kubectl scale deploy (appName) --replicas=(replicaNumber)

On a side note, this also lets us manage deployments on the fly. Say our current image is not compatible with other images; we can change the image version by using the following command.
kubectl set image deployment (appName) (containerName)=(imageName)

Kubernetes tracks histories of all changes made to the deployment, such as when changing the image for a deployment. They can be viewed with the following command
kubectl rollout history deploy (appName)

When changes are made to the image, Kubernetes will automatically scale down replica sets of the deployment with the old image and automatically spin up the same number of replicas for deployment with the newer one. We can verify this by using
kubectl get rs -l app=(appName)

To rollback changes made to a deployment we use the following command. The revisionNumber can be any of the ones listed when running the command kubectl rollout history deploy (appName)
kubectl rollout undo deployment (appName) --to-revision=(revisionNumber)

When rolling back changes, a new revision will be made and it will also remove the revision number of the one we rolled back to. For example I initially deployed with an image of version 1.15 and changed the image to version 1.16. There should be a total of 2 revisions: 
  • 1 (my initial image of version 1.15)
  • 2 (my current image of version 1.16)
If I roll back to revision 1 with the above command, a new revision, 3, will be added to the table and revision 1 will be removed. My history will now look like the following:
  • 2 (image of version 1.16)
  • 3 (image of version 1.15, I rolled back to)
Kubernetes tracks up to 10 revisions for your rollback pleasure.

To delete the deployment use the following command
kubectl delete deployments (appName)

by Calvin Ho at Thu Jan 09 2020 07:26:53 GMT+0000 (Coordinated Universal Time)


Sukhbeer Singh Dhillon

Final release

Experience of being in a community that never sleeps

At this point, I am sure any reader of the blogs related to the Telescope project knows how amazing everyone is feeling after having contributed. Truth be told, I was exuberant too at the start. It was nice to have so much to learn from the various technologies being used, and I could have chosen anything to work on. I didn't have to know everything about the project to work on small parts of it.

And I did contribute to it. I was constantly on the watch, helping review different PRs and supplying whatever knowledge I had to any issue. One of my friends was struggling to even build the Telescope project and see the result for themselves. Why? Because they had a Windows laptop. Now, I have a Windows laptop too, but I rarely boot Windows; I have become so much more comfortable with the Linux command line.

So anyway, this was bizarre given that we had recently added Prettier, which was supposed to fix all of these problems. I sent the following pull request to help our Windows developers.

Fix #323: Disable linebreak style and add gitattributes by sukhbeersingh · Pull Request #324 · Seneca-CDOT/telescope

This added a .gitattributes file, which ensures that all line endings are normalized. Here is what that means, from https://git-scm.com/docs/gitattributes#_code_text_code:

This attribute enables and controls end-of-line normalization. When a text file is normalized, its line endings are converted to LF in the repository.

I also added the eol attribute, which does the following:

This attribute sets a specific line-ending style to be used in the working directory. It enables end-of-line conversion without any content checks, effectively setting the text attribute.
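In practice, the entry looks something like this (a sketch, not necessarily the exact contents of the file in my PR):

* text=auto eol=lf

With text=auto Git normalizes line endings to LF in the repository, and eol=lf keeps LF in the working directory as well, so Windows and Linux checkouts stay consistent.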

More recently I also submitted a minor fix for updating and standardizing all error logging. Here is the link to that PR.

I would have loved to contribute more to this project, but the reality is that I had three other full-time courses and bills to pay, along with international tuition for Seneca, so I didn't always have the time to dedicate to it. Plus, the project moved so fast, with sixty people working on it. In terms of incentives to contribute, there were only two: grades, and potential knowledge for a future workplace.

For the second incentive, I am reminded of the theory of Brain Capacity quoted by Sherlock Holmes in “A Study in Scarlet”,

I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has difficulty laying his hands upon it. Now the skillful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.

Reference:

Doyle, Arthur Conan. A Study in Scarlet: https://www.gutenberg.org/files/244/244-h/244-h.htm


Final release was originally published in sukhbeerdhillon on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Sukhbeer Singh Dhillon at Thu Jan 09 2020 07:11:53 GMT+0000 (Coordinated Universal Time)

Stage III: Finishing up

In the last part of my project series for my software optimization course, I made a minor change that would allow xz to call the 8-byte memcmplen method with unaligned access on AArch64 platforms. The results of that change didn't make my compression any faster. In this post, I want to profile this changed version and compare it against the existing profile I got in the first part of this series. I will also talk about the addition of unaligned access in ARM processors.

I wanted to confirm that my directive was even working, so I tried to produce the preprocessed code for memcmplen.h. I got the following error:

sysdefs.h: No such file or directory

This header file was in a different directory, so I had to include it while calling the preprocessor by using the -iquote option and specifying the folder that has the header file. Here's what worked:

cpp memcmplen.h  -iquote ~/project/git/xz/src/common

It turns out that the code I wanted to be executed was not even selected by the preprocessor directives I put in. I tried two things: using the --enable-unaligned-access flag while running the configure script, so that TUKLIB_FAST_UNALIGNED_ACCESS would be set to 1, and modifying the code so that the unaligned access path would be executed regardless.
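In other words, the first attempt was roughly:

./configure --enable-unaligned-access
make clean && make

(a sketch of the rebuild; the exact configure invocation for the xz tree may carry more options).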

To my surprise, the resulting time didn't change in either case.

/*Modify source code to have unaligned access*/
real 35m36.629s
user 35m11.003s
sys 0m12.343s
/*Include --enable-unaligned-access flag while configuring*/ 
real 35m38.890s
user 35m14.585s
sys 0m11.185s
Flat profile from gprof for first build
Visual call graph for second build

These results demonstrate that, for this software, unaligned memory access while comparing the length of two buffers wouldn't improve efficiency. However, as we noticed in my first blog, there is a significant difference between compression time on x86_64 and AArch64. Maybe it is caused by some other piece of code that we have not looked at in this scope. From the call graphs and the earlier profiling results, one can also see that memcmplen is not in the list of hotspot functions.

I do not have anything to give to this project's upstream. The project is not dead; if you check their git log, they may have slowed down but are still active. This project gave me an opportunity to look at unaligned memory access on processors. I had to read up a lot to understand what that even means; I first heard about it in my Parallel Processing for GPU class, but I didn't quite get it.

This course has been very helpful. Even though I may not have given it my best, I learned a lot through it. I have become so much more comfortable using the command line, and I rarely boot into Windows anymore on my laptop. Chris Tyler's methodology of projecting his way of working on the systems we were supposed to interact with, and of overcoming errors as they came up, taught me more than any lecture. Learning about sysadmin commands, the intro to assembly, and the SIMD material broadened my knowledge of computer architecture overall.

Below are some references if you’d like to read up more on unaligned access on ARM and its progress now.

https://www.kernel.org/doc/Documentation/unaligned-memory-access.txt

https://medium.com/@iLevex/the-curious-case-of-unaligned-access-on-arm-5dd0ebe24965

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka15414.html

https://stackoverflow.com/questions/32062894/take-advantage-of-arm-unaligned-memory-access-while-writing-clean-c-code

https://fgiesen.wordpress.com/2013/10/18/bit-scanning-equivalencies/


Stage III: Finishing up was originally published in sukhbeerdhillon on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Sukhbeer Singh Dhillon at Thu Jan 09 2020 07:10:35 GMT+0000 (Coordinated Universal Time)

Wednesday, January 8, 2020


Rafi Ungar

Telescope: post-solstice planning 🔭

During the past few months, I wrote Jest tests for (and re-implemented) Telescope's preliminary inactive blog filtering functionality, which at that point used file-based storage (currently, this functionality is being upgraded by my peer Calvin to use Redis).

As such, so far, my efforts have been concentrated within a single one of Telescope’s several ‘components’ (namely the “pipeline” that handles/validates parsed blog posts). As a result, I have yet to develop a strong “bird’s eye view” of Telescope as a whole.

Over the course of the next few months, a personal goal of mine is to improve my "bird's eye view" of Telescope by expanding the scope of my contributions to include more of Telescope's 'components'.

I plan to achieve this personal goal of mine (as well as usefully aid Telescope's progress towards production) by tackling issues deemed important that involve not only the implementation of individual components, but also how each component connects to the others (component 'connections').

Specifically, over the course of the next few months, I plan to tackle this goal in two parts:

Part 1: Aid the triage of component connections, e.g. via Labels: “connection: missing”, “connection: unfinished”, or “connection: improveable”.

Part 2: Aid work on “unfinished” (and, subsequently, “improveable”) component connections.


To start, over the next two weeks, I plan to take the first steps needed to complete the first part of my plan:

Step 1a: Compile a list of components and connections shown in Telescope’s ‘chalkboard diagram’ (as well as those identified during meetings, etc.)

Step 1b: Compile a list of every file within Telescope’s codebase that (should) house each listed connection.

Step 1c: Begin triage by marking listed files that are blatantly devoid of an expected connection (e.g. missing expected import statements) as "connection: missing" (e.g. by opening a new Issue, or labelling relevant existing Issues).


In the two weeks following the completion of these first three steps, I plan to take on the remaining steps of the first part of my plan:

Step 1d: Analyze the remaining listed files (i.e. those that are not blatantly ‘disconnected’) to determine which contain connections that are all working as expected, and triaging these files as “connection: working”. Working connections that appear in need of nice-to-have improvements should be instead triaged as “connection: improveable”. The remaining files can therefore be triaged as “connection: unfinished”.

Step 1e: For each file triaged as “connection: unfinished” (e.g. unused import statements, incomplete logic, etc.), file an Issue that describes (i) what appears to have been done to begin implementing this connection and (ii) what appears to still need doing. (If some non-obvious bug is preventing the success of this connection, this Issue should also be labelled as “type: bug”.)


In the weeks following the completion of the first part of my plan, I will tackle the second part. Specifically, I will help resolve the Issues that I opened during Step 1e.

Throughout each step of this plan, I hope to continually count on the advice and aid of my fellow Telescope contributors—my instructor and peers—who I know to each possess a keen “bird’s eye view” of Telescope.


(As an aside, I am also particularly interested in helping brainstorm ways to make better use of GitHub’s project management tools during the weekly Telescope Triage Meetings the Telescope team is planning to hold.)

by Rafi Ungar at Wed Jan 08 2020 12:00:00 GMT+0000 (Coordinated Universal Time)

Tuesday, January 7, 2020


Calvin Ho

Kubernetes Pt2

In the previous post, we used kubectl commands to deploy. However, we can also create .yaml configuration files and have kubectl create the resources from them.

The .yaml file will have the following:

apiVersion: (name)
kind: Deployment
metadata:
  name: (appName)
  labels:
    app: (imageTag)
spec:
  replicas: (replicaNumber)
  selector:
    matchLabels:
      app: (imageTag)
  template:
    metadata:
      labels:
        app: (imageTag)
    spec:
      containers:
      - name: (imageTag)
        image: (dockerImage)
        ports:
        - containerPort: (portNumber)

Then enter the following command:
kubectl create -f (.yaml file)

I pulled an example from the edx Kubernetes course using the nginx image to deploy a webserver

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

This will deploy an app named webserver replicated across three pods.

We can also define an (appName)-svc.yaml file to expose our service, with the following content:

apiVersion: (get this value from running kubectl api-versions)
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: (serviceType)
  externalName: (externalLink) *Use this field if serviceType is set to ExternalName
  ports:
  -  port: (portNumber)
     protocol: TCP
  selector:
    app: (imageTag)

Then enter the following command:
kubectl create -f (appName)-svc.yaml

serviceType can be any of the below:
  1. LoadBalancer - if the cloud provider Kubernetes is running on provides load balancing.
  2. ClusterIP - can only reach the service only from within the cluster
  3. NodePort - creates a ClusterIP and NodePort service will route to it. Allows access from outside the cluster by using NodeIP:NodePort
  4. ExternalName - maps the service to the contents of the externalName field
Also pulled from the edx Kubernetes course:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    run: web-service
spec:
  type: NodePort
  ports:
  -  port: 80
     protocol: TCP
  selector:
    app: nginx

by Calvin Ho at Tue Jan 07 2020 08:16:01 GMT+0000 (Coordinated Universal Time)

Friday, January 3, 2020


Corey James

Reversing Words Order in C++ and Using CMake

I saw this challenge in a job posting and thought I would give it a try. I also set up CMake for the first time to build this project. It worked great; I was able to create the program on my Windows machine using the command line, as I wanted. I think I like Linux Make a …

Continue reading "Reversing Words Order in C++ and Using CMake"

by Corey James at Fri Jan 03 2020 04:15:33 GMT+0000 (Coordinated Universal Time)


Josue Quilon Barrios

KDE Plasma & ssh keys

If you're a Linux user, and the desktop environment of your choice is Gnome, you're probably used to letting Gnome Keyring SSH Agent handle your ssh keys. You just log in, your ssh keys stored in your ~/.ssh folder get loaded in memory, and then you can use them not only in terminals but with any process that requires ssh authorization.

Unfortunately, KDE Plasma doesn't have that feature out of the box, so it needs a bit of tweaking to get the same behaviour.

Let's make some changes to Kwallet and add some scripts to start our ssh-agent and load our keys:


Kwallet
Launch KDE Wallet Configuration and make sure the KDE wallet subsystem is enabled.
Launch Kwallet Manager and create a new wallet if necessary and set a passphrase for it.


Scripts
Now we need to create some scripts to start the ssh-agent on startup, add all the keys, and stop it on shutdown. For this, it's necessary to have the package ksshaskpass installed.

KDE has a designated folder for scripts that will be executed at login but before launching Plasma.

Folder: ~/.config/plasma-workspace/env

In this folder, we need to create a script to start the ssh-agent. Let's call it ssh-agent-startup.sh.

#!/bin/bash

[ -n "$SSH_AGENT_PID" ] || eval "$(ssh-agent -s)"


Also, KDE uses another folder for scripts at login.

Folder: ~/.config/autostart-scripts

Let's add a script to load all our ssh keys. We'll call our script ssh-add.sh.

#!/bin/bash

export SSH_ASKPASS=/usr/bin/ksshaskpass

ssh-add $HOME/.ssh/my_ssh_key1 $HOME/.ssh/my_ssh_key2 $HOME/.ssh/my_ssh_key3...


And finally, let's add a script to stop our ssh-agent at shutdown.

Folder: ~/.config/plasma-workspace/shutdown

Our script will be ssh-agent-shutdown.sh.

#!/bin/bash

[ -z "$SSH_AGENT_PID" ] || eval "$(ssh-agent -k)"


Don't forget to mark the scripts as executables:

chmod +x file/to/mark/as/executable


And that's it. After rebooting, the system will prompt you to enter your keys' passphrases, and if everything went well, you should be able to use your keys with any process that needs ssh authorization.

by Josue Quilon Barrios at Fri Jan 03 2020 01:34:00 GMT+0000 (Coordinated Universal Time)

Wednesday, January 1, 2020


Cindy Le

"So I hear you're a developer…"

It's the holidays and I almost got away from coding… almost… I haven't been active on GitHub thanks to all these social gatherings this season, but I managed to pick up a side gig making a website for a client. He's in the towing industry and he wants a website where potential customers can request a quote by filling out a form. The completed form would then be emailed to him and the customer.

Luckily for me, I already had an idea as to how I was gonna approach this. In my Open Source Development course, our entire class worked on the Telescope project and one of the pull requests I reviewed involved nodemailer . I remember getting frustrated because I couldn’t send an email to myself. After the first day of trying to review something I thought was gonna be easy and having it take far longer than I anticipated, I remember thinking “omg what is this crap? Do we even need this? Why did I think it was a good idea to review this? I’m not even getting marks”. I would’ve given up but the person who made the pull request was really responsive and walked me through on how to test his PR. I think it took like a week of going back and forth to really understand how nodemailer worked.

Request a quote form

I’ve completed most of the styling of the website, and I’m working on the functionality of everything so users would have a nice experience when they’re on the site. My main problem now is that nodemailer works in testing but not in production. I’m trying to figure out how to stop Google from blocking apps trying to sign in.
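For context, the transport side of it looks roughly like this (a sketch: the environment variable names are mine, and switching the Google account to an app password is one way around the blocked sign-ins):

const nodemailer = require('nodemailer');

// Gmail transport; credentials come from environment variables
const transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: {
    user: process.env.EMAIL_USER,
    pass: process.env.EMAIL_PASS, // an app password, not the regular account password
  },
});

// Send the completed quote form to the business owner and the customer
async function sendQuote(to, quoteHtml) {
  return transporter.sendMail({
    from: process.env.EMAIL_USER,
    to,
    subject: 'Your towing quote',
    html: quoteHtml,
  });
}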

So that’s my update. I’ve been kinda putting my React Native stuff to the side so I can finish the site but I’ll definitely get back to it. I have a cool idea for this tow business that involves React Native so I’m pretty excited to pitch my idea.

by Cindy Le at Wed Jan 01 2020 09:04:11 GMT+0000 (Coordinated Universal Time)

Saturday, December 21, 2019


Calvin Ho

So I Added New Linting Rules

An issue was filed for Telescope to address consistency and proper use of async/await and promises, and to fix them. I added new linting rules and ran npm test:
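The rules were along these lines (a sketch; the exact set in the PR may differ):

"rules": {
  "no-async-promise-executor": "error",
  "no-return-await": "error",
  "require-await": "error",
  "no-await-in-loop": "warn"
}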


Oh my god.

by Calvin Ho at Sat Dec 21 2019 05:00:30 GMT+0000 (Coordinated Universal Time)

Friday, December 20, 2019


Cindy Le

Addicted to the Green Squares

My GitHub overview

I'm blogging from my phone today so hopefully the formatting isn't too bad…

I like looking at the stats on my GitHub page because it has these cool visuals, and I've kinda made it a mini game for me to push at least one commit a day so I can get my green square. I haven't been contributing to other open source projects lately since I've been really into learning React Native. I originally thought it was gonna take me about a week to learn it but nope, there's actually a lot more to it than I expected. Not to mention I'm not even using the React Native CLI, I'm using Expo, which is like React Native on training wheels.

Every semester, I find myself interested in the different careers programming can take me to. I went from loving Data Science > Systems Administration > Business Analysis > Web Development (mostly front end)… that's a lot of different routes I can take. Right now, I'm loving open source development and mobile app development. I even had a phase where I was really into fixing computers/phones/tablets and started going through a CompTIA A+ course on Lynda.com, then I tried fixing a tablet screen and I don't know what happened… I put a brand new screen on but the CPU got too hot after?? I won't get into the details but that was that.

Sometimes I wonder if me jumping around different interests too often means I’ll have a hard time holding onto a job long term after I get one… Just kidding! I just enjoy learning different things ;D

I have some pretty cool stats on GitHub right now, you can tell when I started using it more regularly (beginning of August, I was learning Web Dev stuff) then it really kicked off in October which is when Hacktoberfest was happening and I’ve been pretty active since. Most days, I stay up until 2-3am in the morning because I HAVE to code, I think about my projects during the day and have all these great ideas I want to implement but I don’t have time to sit at a computer to code because I have a 3 year old and she doesn’t go to school yet so………. she gets to hangout with me all day every day… it’s hard to keep up with a 3 year old, they have so much energy. I was ready to throw her into school like last year XD

by Cindy Le at Fri Dec 20 2019 18:29:14 GMT+0000 (Coordinated Universal Time)