Planet CDOT (Telescope)

Thursday, August 13, 2020


Raspberry Pi Blog

Mini Raspberry Pi Boston Dynamics–inspired robot

This is a ‘Spot Micro’ walking quadruped robot running on Raspberry Pi 3B. By building this project, redditor /thetrueonion (aka Mike) wanted to teach themself robotic software development in C++ and Python, get the robot walking, and master velocity and directional control.

Mike was inspired by Spot, one of Boston Dynamics’ robots developed for industry to perform remote operation and autonomous sensing.

What’s it made of?

  • Raspberry Pi 3B
  • Servo control board: PCA9685, controlled via I2C
  • Servos: 12 × PDI-HV5523MG
  • LCD Panel: 16×2 I2C LCD panel
  • Battery: 2s 4000 mAh LiPo, direct connection to power servos
  • UBEC: HKU5 5V/5A ubec, used as 5V voltage regulator to power Raspberry Pi, LCD panel, PCA9685 control board
  • Thingiverse 3D-printed Spot Micro frame

How does it walk?

The mini ‘Spot Micro’ bot rocks a three-axis angle command/body pose control mode via keyboard and can achieve ‘trot gait’ or ‘walk gait’. The former is a four-phase gait with symmetric motion of two legs at a time (like a horse trotting). The latter is an eight-phase gait with one leg swinging at a time and a body shift in between for balance (like humans walking).

Mike breaks down how they got the robot walking, right down to the order the servos need to be connected to the PCA9685 control board, in this extensive walkthrough.

Here’s the code

And yes, this is one of those magical projects with all the code you need stored on GitHub. The software is implemented on a Raspberry Pi 3B running Ubuntu 16.04. It’s composed of C++ and Python nodes in a ROS framework.

  • Pose
  • Strut

What’s next?

Mike isn’t finished yet: they are looking to improve their yellow beast by incorporating a lidar to achieve simple 2D mapping of a room. Also on the list is developing an autonomous motion-planning module to guide the robot to execute a simple task around a sensed 2D environment. And finally, adding a camera or webcam to conduct basic image classification would finesse their creation.

The post Mini Raspberry Pi Boston Dynamics–inspired robot appeared first on Raspberry Pi.

by Raspberry Pi Blog at Thu Aug 13 2020 17:57:05 GMT+0000 (Coordinated Universal Time)

Wednesday, August 12, 2020


Raspberry Pi Blog

Track your punches with Raspberry Pi

‘Track-o-punches’ tracks the number of punches thrown during workouts with Raspberry Pi and a Realsense camera, and it also displays your progress and sets challenges on a touchscreen.

In this video, Cisco shows you how to set up the Realsense camera and a Python virtual environment, and how to install dependencies and OpenCV for Python on your Raspberry Pi.

How it works

A Realsense robotic camera tracks the boxing glove as it enters and leaves the frame. Colour segmentation means the camera can more precisely pick up when Cisco’s white boxing glove is in frame. He walks you through how to threshold images for colour segmentation at this point in the video.
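
Cisco’s exact code isn’t reproduced here, but the colour segmentation step he describes is typically done by thresholding in HSV space. Here is a minimal OpenCV sketch along those lines — the HSV bounds for a white glove and the function name are illustrative assumptions, not values from the video:

import cv2
import numpy as np

def segment_glove(frame_bgr, lower_hsv=(0, 0, 180), upper_hsv=(180, 60, 255)):
    """Return a binary mask of pixels whose HSV values fall inside the bounds."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Erode then dilate to clean up speckle noise before counting the region
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    return mask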

Testing the tracking

All this data is then crunched on Raspberry Pi. Cisco’s code counts the consecutive frames that the segmented object is present; if that number is greater than a threshold, the code sees this as a particular action.
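
In other words, a punch only counts once the glove has been seen for several frames in a row. A hedged sketch of that counting logic — the frame and area thresholds are placeholders, not Cisco’s numbers:

import cv2

PUNCH_FRAMES = 5      # consecutive frames needed before an entry counts as a punch
MIN_AREA = 1500       # minimum number of segmented pixels to treat the glove as present

consecutive = 0
punches = 0
glove_in_frame = False

def update_punch_count(mask):
    """Call once per frame with the binary mask produced by colour segmentation."""
    global consecutive, punches, glove_in_frame
    present = cv2.countNonZero(mask) > MIN_AREA
    if present:
        consecutive += 1
        if consecutive >= PUNCH_FRAMES and not glove_in_frame:
            punches += 1              # register one punch per entry into the frame
            glove_in_frame = True
    else:
        consecutive = 0
        glove_in_frame = False
    return punches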

Raspberry Pi 4 being mounted on the Raspberry Pi 7″ Touch Display

Cisco used this data to set punch goals for the user. The Raspberry Pi computer is connected to an official Raspberry Pi 7″ Touch Display in order to display “success” and “fail” messages as well as the countdown clock. Once a goal is reached, the touchscreen tells the boxer that they’ve successfully hit their target. Then the counter resets and a new goal is displayed. You can manipulate the code to set a time limit to reach a punch goal, but setting a countdown timer was the hardest bit to code for Cisco.

Kit list

Jeeeez, it’s hard to get a screen grab of Cisco’s fists of fury

A mobile power source makes it easier to set up a Raspberry Pi wherever you want to work out. Cisco 3D-printed a mount for the Realsense camera and secured it on the ceiling so it could look down on him while he punched.

The post Track your punches with Raspberry Pi appeared first on Raspberry Pi.

by Raspberry Pi Blog at Wed Aug 12 2020 10:47:24 GMT+0000 (Coordinated Universal Time)

Tuesday, August 11, 2020


Mozilla

Changing World, Changing Mozilla

This is a time of change for the internet and for Mozilla. From combatting a lethal virus and battling systemic racism to protecting individual privacy — one thing is clear: an open and accessible internet is essential to the fight.

Mozilla exists so the internet can help the world collectively meet the range of challenges a moment like this presents. Firefox is a part of this. But we know we also need to go beyond the browser to give people new products and technologies that both excite them and represent their interests. Over the last while, it has been clear that Mozilla is not structured properly to create these new things — and to build the better internet we all deserve.

Today we announced a significant restructuring of Mozilla Corporation. This will strengthen our ability to build and invest in products and services that will give people alternatives to conventional Big Tech. Sadly, the changes also include a significant reduction in our workforce by approximately 250 people. These are individuals of exceptional professional and personal caliber who have made outstanding contributions to who we are today. To each of them, I extend my heartfelt thanks and deepest regrets that we have come to this point. This is a humbling recognition of the realities we face, and what is needed to overcome them.

As I shared in the internal message sent to our employees today, our pre-COVID plan for 2020 included a great deal of change already: building a better internet by creating new kinds of value in Firefox; investing in innovation and creating new products; and adjusting our finances to ensure stability over the long term.  Economic conditions resulting from the global pandemic have significantly impacted our revenue. As a result, our pre-COVID plan was no longer workable. Though we’ve been talking openly with our employees about the need for change — including the likelihood of layoffs — since the spring, it was no easier today when these changes became real. I desperately wish there was some other way to set Mozilla up for long term success in building a better internet.

But to go further, we must be organized to be able to think about a different world. To imagine that technology will become embedded in our world even more than it is, and we want that technology to have different characteristics and values than we experience today.

So going forward we will be smaller. We’ll also be organizing ourselves very differently, acting more quickly and nimbly. We’ll experiment more. We’ll adjust more quickly. We’ll join with allies outside of our organization more often and more effectively. We’ll meet people where they are. We’ll become great at expressing and building our core values into products and programs that speak to today’s issues. We’ll join and build with all those who seek openness, decency, empowerment and common good in online life.

I believe this vision of change will make a difference — that it can allow us to become a Mozilla that excites people and shapes the agenda of the internet. I also realize this vision will feel abstract to many. With this in mind, we have mapped out five specific areas to focus on as we roll out this new structure over the coming months:

  1. New focus on product. Mozilla must be a world-class, modern, multi-product internet organization. That means diverse, representative, focused on people outside of our walls, solving problems, building new products, engaging with users and doing the magic of mixing tech with our values. To start, that means products that mitigate harms or address the kinds of problems that people face today. Over the longer run, our goal is to build new experiences that people love and want, that have better values and better characteristics inside those products.
  2. New mindset. The internet has become the platform. We love the traits of it — the decentralization, its permissionless innovation, the open source underpinnings of it, and the standards part — we love it all. But to enable these changes, we must shift our collective mindset from a place of defending, protecting, sometimes even huddling up and trying to keep a piece of what we love to one that is proactive, curious, and engaged with people out in the world. We will become the modern organization we aim to be — combining product, technology and advocacy — when we are building new things, making changes within ourselves and seeing how the traits of the past can show up in new ways in the future.
  3. New focus on technology. Mozilla is a technical powerhouse of the internet activist movement. And we must stay that way. We must provide leadership, test out products, and draw businesses into areas that aren’t traditional web technology. The internet is the platform now with ubiquitous web technologies built into it, but vast new areas are developing (like Wasmtime and the Bytecode Alliance vision of nanoprocesses). Our vision and abilities should play in those areas too.
  4. New focus on community. Mozilla must continue to be part of something larger than ourselves, part of the group of people looking for a better internet. Our open source volunteers today — as well as the hundreds of thousands of people who donate to and participate in Mozilla Foundation’s advocacy work — are a precious and critical part of this. But we also need to go further and think about community in new ways. We must be increasingly open to joining others on their missions, to contribute to the better internet they’re building.
  5. New focus on economics. Recognizing that the old model where everything was free has consequences, means we must explore a range of different business opportunities and alternate value exchanges. How can we lead towards business models that honor and protect people while creating opportunities for our business to thrive? How can we, or others who want a better internet, or those who feel like a different balance should exist between social and public benefit and private profit offer an alternative? We need to identify those people and join them. We must learn and expand different ways to support ourselves and build a business that isn’t what we see today.

We’re fortunate that Firefox and Mozilla retain a high degree of trust in the world. Trust and a feeling of authenticity feel unusual in tech today. But there is a sense that people want more from us. They want to work with us, to build with us. The changes we are making today are hard. But with these changes we believe we’ll be ready to meet these people — and the challenges and opportunities facing the future of the internet — head on.

The post Changing World, Changing Mozilla appeared first on The Mozilla Blog.

by Mozilla at Tue Aug 11 2020 14:00:14 GMT+0000 (Coordinated Universal Time)


Raspberry Pi Blog

New twist on Raspberry Pi experimental resin 3D printer

Element14’s Clem previously built a giant Raspberry Pi-powered resin-based 3D printer and here, he’s flipped the concept upside down.

The new Raspberry Pi 4 8GB reduces slicing times and makes for a more responsive GUI on this experimental 3D printer. Let’s take a look at what Clem changed and how…

The previous iteration of his build was “huge”, mainly because the only suitable screen Clem had to hand was a big 4K monitor. This new build flips the previous concept upside down by reducing the base size and the amount of resin needed.

Breaking out of the axis

To resize the project effectively, Clem came out of an X,Y axis and into Z, reducing the surface area but still allowing for scaling up, well, upwards! The resized, flipped version of this project also reduces the cost (resin is expensive stuff) and makes the whole thing more portable than a traditional, clunky 3D printer.

Look how slim and portable it is!

How it works

Now for the brains of the thing: nanodlp is free (but not open source) software which Clem ran on a Raspberry Pi 4. Using an 8GB Raspberry Pi will get you faster slicing times, so go big if you can.

A 5V and 12V switching power supply sorts out the Nanotec stepper motor. To get the signal from the Raspberry Pi GPIO pins to the stepper driver and to the motor, the pins are configured in nanodlp; Clem has shared his settings if you’d like to copy them (scroll down on this page to find a ‘Resources’ zip file just under the ‘Bill of Materials’ list).

Raspberry Pi working together with the display

For the display, there’s a Midas screen and an official Raspberry Pi 7″ Touchscreen Display, both of which work perfectly with nanodlp.

At 9:15 minutes into the project video, Clem shows you around Fusion 360 and how he designed, printed, assembled, and tested the build’s engineering.

A bit of Fusion 360

Experimental resin

Now for the fancy, groundbreaking bit: Clem chose very specialised photocentric, high-tensile daylight resin so he can use LEDs with a daylight spectrum. This type of resin also has a lower density, so the liquid does not need to be suspended by surface tension (as in traditional 3D printers), rather it floats because of its own buoyancy. This way, you’ll need less resin to start with, and you’ll waste less too whenever you make a mistake. At 13:30 minutes into the project video, Clem shares the secret of how you achieve an ‘Oversaturated Solution’ in order to get your resin to float.

Now for the science bit…

Materials

It’s not perfect but, if Clem’s happy, we’re happy.

Join the conversation on YouTube if you’ve got an idea that could improve this unique approach to building 3D printers.

The post New twist on Raspberry Pi experimental resin 3D printer appeared first on Raspberry Pi.

by Raspberry Pi Blog at Tue Aug 11 2020 12:48:51 GMT+0000 (Coordinated Universal Time)

Monday, August 10, 2020


Raspberry Pi Blog

Raspberry Pi calls out your custom workout routine

If you don’t want to be tied to a video screen during home workouts, Llum Acosta, Samreen Islam, and Alfred Gonzalez shared this great Raspberry Pi–powered alternative on hackster.io: their voice-activated project announces each move of your workout routine and how long you need to do it for.

This LED-lit, compact solution means you don’t need to squeeze yourself in front of a TV or crane to see what your video instructor is doing next. Instead you can be out in the garden or at a local park and complete your own, personalised workout on your own terms.

Kit list:

Raspberry Pi and MATRIX Device

The makers shared these setup guides to get MATRIX working with your Raspberry Pi. Our tiny computer doesn’t have a built-in microphone, so here’s where the two need to work together.

MATRIX, meet Raspberry Pi

Once that’s set up, ensure you enable SSH on your Raspberry Pi.

Click, click. Simple

The three sweet Hackster angels shared a four-step guide to running the software of your own customisable workout routine buddy in their original post. Happy hacking!

1. Install MATRIX Libraries and Rhasspy

Follow the steps below in order for Rhasspy to work on your Raspberry Pi.

2. Creating an intent

Access Rhasspy’s web interface by opening a browser and navigating to http://YOUR_PI_IP_HERE:12101. Then click on the Sentences tab. All intents and sentences are defined here.

By default, there are a few example sentences in the text box. Remove the default intents and add the following:

[Workout]
start [my] workout

Once created, click on Save Sentences and wait for Rhasspy to finish training.

Here, Workout is an intent. You can change the wording to anything that works for you as long as you keep [Workout] the same, because this intent name will be used in the code.

3. Catching the intent

Install git on your Raspberry Pi.

sudo apt install git

Download the repository.

git clone https://github.com/matrix-io/rhasspy-workout-timer

Navigate to the folder and install the project dependencies.

cd rhasspy-workout-timer
npm install

Run the program.

node index.js

4. Using and customizing the project

To change the workout to your desired routine, head into the project folder and open workout.txt. There, you’ll see:

jumping jacks 12,plank 15, test 14

To make your own workout routine, type an exercise name followed by the number of seconds to do it for. Repeat that for each exercise you want to do, separating each combo using a comma.
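
The real project reads this file in its Node.js index.js, but the format is simple enough to illustrate with a short, hypothetical Python sketch:

def parse_workout(path="workout.txt"):
    """Return a list of (exercise, seconds) tuples from the comma-separated file."""
    with open(path) as f:
        text = f.read().strip()
    routine = []
    for entry in text.split(","):
        name, _, seconds = entry.strip().rpartition(" ")
        routine.append((name, int(seconds)))
    return routine

print(parse_workout())   # e.g. [('jumping jacks', 12), ('plank', 15), ('test', 14)]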

Whenever you want to use the Rhasspy Assistant, run the file and say “Start my workout” (or whatever it is you have it set to).

And now you’re all done — happy working out. Make sure to visit the makers’ original post on hackster.io and give it a like.

The post Raspberry Pi calls out your custom workout routine appeared first on Raspberry Pi.

by Raspberry Pi Blog at Mon Aug 10 2020 15:09:04 GMT+0000 (Coordinated Universal Time)

Saturday, August 8, 2020


Raspberry Pi Blog

Create a stop motion film with Digital Making at Home

Join us for Digital Making at Home: this week, young people can do stop motion and time-lapse animation with us! Through Digital Making at Home, we invite kids all over the world to code along with us and our new videos every week.

So get your Raspberry Pi and Camera Module ready! We’re using them to capture life with code this week:

Check out this week’s code-along projects!

And tune in on Wednesday 2pm BST / 9am EDT / 7.30pm IST at rpf.io/home to code along with our live stream session to make a motion-detecting dance game in Scratch!

The post Create a stop motion film with Digital Making at Home appeared first on Raspberry Pi.

by Raspberry Pi Blog at Sat Aug 08 2020 10:10:54 GMT+0000 (Coordinated Universal Time)

Friday, August 7, 2020


Raspberry Pi Blog

Processing raw image files from a Raspberry Pi High Quality Camera

When taking photos, most of us simply like to press the shutter button on our cameras and phones so that a viewable image is produced almost instantaneously, usually encoded in the well-known JPEG format. However, there are some applications where a little more control over the production of that JPEG is desirable. For instance, you may want more or less de-noising, or you may feel that the colours are not being rendered quite right.

This is where raw (sometimes RAW) files come in. A raw image in this context is a direct capture of the pixels output from the image sensor, with no additional processing. Normally this is in a relatively standard format known as a Bayer image, named after Bryce Bayer who pioneered the technique back in 1974 while working for Kodak. The idea is not to let the on-board hardware ISP (Image Signal Processor) turn the raw Bayer image into a viewable picture, but instead to do it offline with an additional piece of software, often referred to as a raw converter.

A Bayer image records only one colour at each pixel location, in the pattern shown

The raw image is sometimes likened to the old photographic negative, and whilst many camera vendors use their own proprietary formats, the most portable form of raw file is the Digital Negative (or DNG) format, defined by Adobe in 2004. The question at hand is how to obtain DNG files from Raspberry Pi, in such a way that we can process them using our favourite raw converters.

Obtaining a raw image from Raspberry Pi

Many readers will be familiar with the raspistill application, which captures JPEG images from the attached camera. raspistill includes the -r option, which appends all the raw image data to the end of the JPEG file. JPEG viewers will still display the file as normal but ignore the (many megabytes of) raw data tacked on the end. Such a “JPEG+RAW” file can be captured using the terminal command:

raspistill -r -o image.jpg

Unfortunately this JPEG+RAW format is merely what comes out of the camera stack and is not supported by any raw converters. So to make use of it we will have to convert it into a DNG file.

PyDNG

This Python utility converts the Raspberry Pi’s native JPEG+RAW files into DNGs. PyDNG can be installed from github.com/schoolpost/PyDNG, where more complete instructions are available. In brief, we need to perform the following steps:

git clone https://github.com/schoolpost/PyDNG
cd PyDNG
pip3 install src/.  # note that PyDNG requires Python3

PyDNG can be used as part of larger Python scripts, or it can be run stand-alone. Continuing the raspistill example from before, we can enter in a terminal window:

python3 examples/utility.py image.jpg

The resulting DNG file can be processed by a variety of raw converters. Some are free (such as RawTherapee or dcraw, though the latter is no longer officially developed or supported), and there are many well-known proprietary options (Adobe Camera Raw or Lightroom, for instance). Perhaps users will post in the comments any that they feel have given them good results.

White balancing and colour matrices

Now, one of the bugbears of processing Raspberry Pi raw files up to this point has been the problem of getting sensible colours. Previously, the images have been rendered with a sickly green cast, simply because no colour balancing is being done and green is normally the most sensitive colour channel. In fact it’s even worse than this, as the RGB values in the raw image merely reflect the sensitivity of the sensor’s photo-sites to different wavelengths, and do not a priori have more than a general correlation with the colours as perceived by our own eyes. This is where we need white balancing and colour matrices.

Correct white balance multipliers are required if neutral parts of the scene are to look, well, neutral. We can use raspistill’s guesstimate of them, found in the JPEG+RAW file (or you can measure your own on a neutral part of the scene, like a grey card). Matrices and look-up tables are then required to convert colour from ‘camera’ space to the final colour space of choice, mostly sRGB or Adobe RGB.
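
As a rough illustration of those two steps, the maths amounts to a per-channel multiply followed by a 3×3 matrix applied to every pixel. The gains and matrix below are placeholder numbers, not the calibrated values shipped with PyDNG:

import numpy as np

wb_gains = np.array([1.9, 1.0, 1.6])        # R, G, B multipliers (illustrative)
camera_to_srgb = np.array([                 # illustrative camera-to-sRGB matrix
    [ 1.7, -0.5, -0.2],
    [-0.3,  1.6, -0.3],
    [ 0.0, -0.6,  1.6],
])

def camera_rgb_to_srgb(raw_rgb):
    """raw_rgb: (H, W, 3) linear camera RGB in [0, 1] after demosaicing."""
    balanced = raw_rgb * wb_gains           # neutral greys become equal R=G=B
    srgb_linear = balanced @ camera_to_srgb.T
    return np.clip(srgb_linear, 0.0, 1.0)   # gamma encoding would follow here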

My thanks go to forum contributors Jack Hogan for measuring these colour matrices, and to Csaba Nagy for implementing them in the PyDNG tool. The results speak for themselves.

Results

Previous attempts at raw conversion are on the left; the results using the updated PyDNG are on the right.

Images 2 and 3 courtesy of Csaba Nagy; images 4 and 5 courtesy of Jack Hogan

DCP files

For those familiar with DNG files, we include links to DCP (DNG Camera Profile) files (warning: binary format). You can try different ones out in raw converters, and we would encourage users to experiment, to perhaps create their own, and to share their results!

  1. This is a basic colour profile baked into PyDNG, and is the one shown in the results above. It’s sufficiently small that we can view it as a JSON file.
  2. This is an improved (and larger) profile involving look-up tables, and aiming for an overall balanced colour rendition.
  3. This is similar to the previous one, but with some adjustments for skin tones and sky colours.

Note, however, that these files come with a few caveats. Specifically:

  • The calibration is only for a single Raspberry Pi High Quality Camera rather than a known average or “typical” module.
  • The illuminants used for the calibration are merely the ones that we had to hand — the D65 lamp in particular appears to be some way off.
  • The calibration only really works when the colour temperature lies between, or not too far from, the two calibration illuminants, approximately 2900K to 6000K in our case.

So there remains room for improvement. Nevertheless, results across a number of modules have shown these parameters to be a significant step forward.

Acknowledgements

My thanks again to Jack Hogan for performing the colour matrix calibration with DCamProf, and to Csaba Nagy for adding these new features to PyDNG.

Further reading

  1. There are many resources explaining how a raw (Bayer) image is converted into a viewable RGB or YUV image, among them Jack’s blog post.
  2. To understand the role of the colour matrices in a DNG file, please refer to the DNG specification. Chapter 6 in particular describes how they are used.

The post Processing raw image files from a Raspberry Pi High Quality Camera appeared first on Raspberry Pi.

by Raspberry Pi Blog at Fri Aug 07 2020 10:12:48 GMT+0000 (Coordinated Universal Time)

Thursday, August 6, 2020


Mozilla

Virtual Tours of the Museum of the Fossilized Internet

Let’s brainstorm a sustainable future together.

Imagine: We are in the year 2050 and we’re opening the Museum of the Fossilized Internet, which commemorates two decades of a sustainable internet. The exhibition can now be viewed in social VR. Join an online tour and experience what the coal and oil-powered internet of the past was like.

 

Visit the Museum from home

In March 2020, Michelle Thorne and I announced office tours of the Museum of the Fossilized Internet as part of our new Sustainability programme. Then the pandemic hit, and we teamed up with the Mozilla Mixed Reality team to make it more accessible while also demonstrating the capabilities of social VR with Hubs.

We now welcome visitors to explore the museum at home through their browsers.

The museum was created to be a playful source of inspiration and an invitation to imagine more positive, sustainable futures. Here’s a demo tour to help you get started on your visit.

Video Production: Dan Fernie-Harper; Spoke scene: Liv Erickson and Christian Van Meurs; Tour support: Elgin-Skye McLaren.

 

Foresight workshops

But that’s not all. We are also building on the museum with a series of foresight workshops. Once we know what preferable, sustainable alternatives look like, we can start building towards them so that in a few years, this museum is no longer just a thought experiment, but real.

Our first foresight workshop will focus on policy with an emphasis on trustworthy AI. In a pilot, facilitators Michelle Thorne and Fieke Jansen will focus specifically on the strategic opportunity presented by the European Commission’s ongoing work to define its AI strategy, climate agenda, and COVID-19 recovery plans. By considering these together, the workshop will develop options to advance both trustworthy AI and climate justice.

More foresight workshops should and will follow. We are currently looking at businesses, technology, or the funders community as additional audiences. Updates will be available on the wiki.

You are also invited to join the sustainability team as well as our environmental champions on our Matrix instance to continue brainstorming sustainable futures. More updates on Mozilla’s journey towards sustainability will be shared here on the Mozilla Blog.

The post Virtual Tours of the Museum of the Fossilized Internet appeared first on The Mozilla Blog.

by Mozilla at Thu Aug 06 2020 11:46:57 GMT+0000 (Coordinated Universal Time)


Raspberry Pi Blog

Recreate Time Pilot’s free-scrolling action | Wireframe #41

Fly through the clouds in our re-creation of Konami’s classic 1980s shooter. Mark Vanstone has the code

  • Designed by Yoshiki Okamoto, Konami’s Time Pilot saw an arcade release in 1982.

Arguably one of Konami’s most successful titles, Time Pilot burst into arcades in 1982. Yoshiki Okamoto worked on it secretly, and it proved so successful that a sequel soon followed. In the original, the player flew through five eras, from 1910, 1940, 1970, 1982, and then to the far future: 2001. Aircraft start as biplanes and progress to become UFOs, naturally, by the last level.

Players also rescue other pilots by picking them up as they parachute from their aircraft. The player’s plane stays in the centre of the screen while other game objects move around it. The clouds that give the impression of movement have a parallax style to them, some moving faster than others, offering an illusion of depth.

To make our own version with Pygame Zero, we need eight frames of player aircraft images – one for each direction it can fly. After we create a player Actor object, we can get input from the cursor keys and change the direction the aircraft is pointing with a variable which will be set from zero to 7, zero being the up direction. Before we draw the player to the screen, we set the image of the Actor to the stem image name, plus whatever that direction variable is at the time. That will give us a rotating aircraft.
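
Mark’s full code is linked at the end of the article, but the core of that idea fits in a few lines of Pygame Zero. In this hedged sketch, the image names “plane0” to “plane7” (one frame per facing) are assumptions rather than the asset names used in the sample:

WIDTH = 800
HEIGHT = 600

player = Actor("plane0", center=(WIDTH // 2, HEIGHT // 2))
player.direction = 0          # 0 = up, increasing clockwise through the 8 frames

def update():
    # The cursor keys rotate the aircraft rather than move it: the plane
    # stays centred and the world moves around it instead.
    if keyboard.left:
        player.direction = (player.direction - 1) % 8
    if keyboard.right:
        player.direction = (player.direction + 1) % 8

def draw():
    screen.clear()
    player.image = "plane" + str(player.direction)   # stem name plus direction
    player.draw()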

To provide a sense of movement, we add clouds. We can make a set of random clouds on the screen and move them in the opposite direction to the player aircraft. As we only have eight directions, we can use a lookup table to change the x and y coordinates rather than calculating movement values. When they go off the screen, we can make them reappear on the other side so that we end up with an ‘infinite’ playing area. Add a level variable to the clouds, and we can move them at different speeds on each update() call, producing the parallax effect. Then we need enemies. They will need the same eight frames to move in all directions. For this sample, we will just make one biplane, but more could be made and added.
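
Continuing the sketch above, the lookup table and the parallax clouds might look like this; the “cloud” image name, cloud count, and speeds are assumptions, and move_clouds() would be called from update() each frame:

import random

# x/y step for each of the 8 facings, 0 = up, proceeding clockwise
MOVES = [(0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1)]

clouds = []
for _ in range(10):
    c = Actor("cloud", pos=(random.randint(0, WIDTH), random.randint(0, HEIGHT)))
    c.level = random.randint(1, 3)      # higher level = nearer, so it moves faster
    clouds.append(c)

def move_clouds():
    dx, dy = MOVES[player.direction]
    for c in clouds:
        # Clouds drift opposite to the player's heading, scaled by their depth
        c.x = (c.x - dx * c.level) % WIDTH
        c.y = (c.y - dy * c.level) % HEIGHT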

Our Python homage to Konami’s arcade classic.

To get the enemy plane to fly towards the player, we need a little maths. We use the math.atan2() function to work out the angle between the enemy and the player. We convert that to a direction which we set in the enemy Actor object, and set its image and movement according to that direction variable. We should now have the enemy swooping around the player, but we will also need some bullets. When we create bullets, we need to put them in a list so that we can update each one individually in our update(). When the player hits the fire button, we just need to make a new bullet Actor and append it to the bullets list. We give it a direction (the same as the player Actor) and send it on its way, updating its position in the same way as we have done with the other game objects.
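
A hedged continuation of the same sketch shows the atan2 homing and the bullets list. The “enemy0”–“enemy7” and “bullet” image names, the speeds, and the helper names are assumptions; update_enemy() and move_bullets() would run from update(), with fire() called when the fire button is pressed:

import math

enemy = Actor("enemy0", pos=(100, 100))
bullets = []

def angle_to_direction(dx, dy):
    # atan2 gives the angle towards the player; quantise it to one of 8 facings
    angle = math.degrees(math.atan2(dx, -dy)) % 360
    return int((angle + 22.5) // 45) % 8

def update_enemy():
    d = angle_to_direction(player.x - enemy.x, player.y - enemy.y)
    enemy.image = "enemy" + str(d)
    dx, dy = MOVES[d]
    enemy.x += dx * 2
    enemy.y += dy * 2

def fire():
    b = Actor("bullet", pos=player.pos)
    b.direction = player.direction       # bullets fly the way the player faces
    bullets.append(b)

def move_bullets():
    for b in bullets:
        dx, dy = MOVES[b.direction]
        b.x += dx * 6
        b.y += dy * 6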

The last thing is to detect bullet hits. We do a quick point collision check and if there’s a match, we create an explosion Actor and respawn the enemy somewhere else. For this sample, we haven’t got any housekeeping code to remove old bullet Actors, which ought to be done if you don’t want the list to get really long, but that’s about all you need: you have yourself a Time Pilot clone!

Here’s Mark’s code for a Time Pilot-style free-scrolling shooter. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 41

You can read more features like this one in Wireframe issue 41, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 41 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Time Pilot’s free-scrolling action | Wireframe #41 appeared first on Raspberry Pi.

by Raspberry Pi Blog at Thu Aug 06 2020 10:15:51 GMT+0000 (Coordinated Universal Time)

Wednesday, August 5, 2020


Raspberry Pi Blog

Raspberry Pi keyboards for Japan are here!

When we announced new keyboards for Portugal and the Nordic countries last month, we promised that you wouldn’t have to wait much longer for a variant for Japan, and now it’s here!

Japanese Raspberry Pi keyboard

The Japan variant of the Raspberry Pi keyboard required a whole new moulding set to cover the 83-key arrangement of the keys. It’s quite a complex keyboard, with three different character sets to deal with. Figuring out how the USB keyboard controller maps to all the special keys on a Japanese keyboard was particularly challenging, with most web searches leading to non-English websites. Since I don’t read Japanese, it all became rather bewildering.

We ended up reverse-engineering generic Japanese keyboards to see how they work, and mapping the keycodes to key matrix locations. We are fortunate that we have a very patient keyboard IC vendor, called Holtek, which produces the custom firmware for the controller.

We then had to get these prototypes to our contacts in Japan, who told us which keys worked and which just produced a strange squiggle that they didn’t understand either. The “Yen” key was particularly difficult because many non-Japanese computers read it as a “/” character, no matter what we tried to make it work.

Special thanks are due to Kuan-Hsi Ho of Holtek, to Satoka Fujita for helping me test the prototypes, and to Matsumoto Seiya for also testing units and checking the translation of the packaging.

Get yours today

You can get the new Japanese keyboard variant in red/white from our Approved Reseller, SwitchScience, based in Japan.

If you’d rather your keyboard in black/grey, you can purchase it from Pimoroni and The Pi Hut in the UK, who both offer international shipping.

The post Raspberry Pi keyboards for Japan are here! appeared first on Raspberry Pi.

by Raspberry Pi Blog at Wed Aug 05 2020 11:40:50 GMT+0000 (Coordinated Universal Time)

Tuesday, August 4, 2020


Mozilla

Latest Firefox rolls out Enhanced Tracking Protection 2.0; blocking redirect trackers by default

Today, Firefox is introducing Enhanced Tracking Protection (ETP) 2.0, our next step in continuing to provide a safe and private experience for our users. ETP 2.0 protects you from an advanced tracking technique called redirect tracking, also known as bounce tracking. We will be rolling out ETP 2.0 over the next couple of weeks.

Last year we enabled ETP by default in Firefox because we believe that understanding the complexities and sophistication of the ad tracking industry should not be required to be safe online. ETP 1.0 was our first major step in fulfilling that commitment to users. Since we enabled ETP by default, we’ve blocked 3.4 trillion tracking cookies. With ETP 2.0, Firefox brings an additional level of privacy protection to the browser.

Since the introduction of ETP, ad industry technology has found other ways to track users: creating workarounds and new ways to collect your data in order to identify you as you browse the web. Redirect tracking goes around Firefox’s built-in third-party cookie-blocking policy by passing you through the tracker’s site before landing on your desired website. This enables them to see where you came from and where you are going.

Firefox deletes tracking cookies every day

With ETP 2.0, Firefox users will now be protected against these methods as it checks to see if cookies and site data from those trackers need to be deleted every day. ETP 2.0 stops known trackers from having access to your information, even those whose sites you may have inadvertently visited. ETP 2.0 clears cookies and site data from tracking sites every 24 hours.

Sometimes trackers do more than just track. They may also offer services you engage with, such as a search engine or social network. If Firefox cleared cookies for these services we’d end up logging you out of your email or social network every day, so we don’t clear cookies from sites you have interacted with in the past 45 days, even if they are trackers. This way you don’t lose the benefits of the cookies that keep you logged in on sites you frequent, and you don’t open yourself up to being tracked indefinitely based on a site you’ve visited once. To read the technical details about how this works, visit our Security Blog post.

What does this all mean for you? You can simply continue to browse the web with Firefox. We are doing more to protect your privacy, automatically. Without needing to change a setting or preference, this new protection deletes cookies that use workarounds to track you so you can rest easy.

Check out and download the latest version of Firefox available here.

The post Latest Firefox rolls out Enhanced Tracking Protection 2.0; blocking redirect trackers by default appeared first on The Mozilla Blog.

by Mozilla at Tue Aug 04 2020 13:05:08 GMT+0000 (Coordinated Universal Time)

Fast Company Recognizes Katharina Borchert as one of the Most Creative Business People

We are proud to share that Katharina Borchert, Mozilla’s Chief Open Innovation Officer, has been named one of the Most Creative People by Fast Company. The award recognizes her leadership on Common Voice and helping to collect and diversify open speech data to build and train voice-enabled applications. Katharina was recognized not just for a groundbreaking idea, but because her work is having a measurable impact in the world.

Among the 74 receiving this award are leaders such as Kade Crockford of the American Civil Liberties Union of Massachusetts, for work leading to banning face surveillance in Boston, and Stina Ehrensvärd, CEO of Yubico, for the building of WebAuthn, a heightened set of security protocols developed in collaboration with Google, Mozilla, and Microsoft. The full list also includes vintner Krista Scruggs, dancer and choreographer Twyla Tharp, and Ryan Reynolds: “for delivering an honest message, even when it’s difficult”.

“This is a real honor,” said Katharina, “which also reflects the contributions of an incredible alliance of people at Mozilla and beyond. We have a way to go before the full promise of Common Voice is realized. But I’m incredibly inspired by the different communities globally building it together with Mozilla, because language is so important for our identities and for keeping cultural diversity alive in the digital age. Extending the reach of voice recognition to more languages can only open the doors to more innovation and make tech more inclusive.”

Common Voice is Mozilla’s global crowdsourcing initiative to build multilingual open voice datasets that help teach machines how real people speak. Since 2017, we’ve made unparalleled progress in terms of language representation. There’s no comparable initiative, nor any open dataset, that includes as many (also under-resourced) languages. This makes it the largest multilingual public domain voice dataset. In June this year we released an updated edition with more than 7,200 total hours of contributed voice data in 54 languages, including English, German, Spanish, and Mandarin Chinese (Traditional), but also, Welsh, Kabyle, and Kinyarwanda.

The growing Common Voice dataset is unique not only in its size and licence model, but also in its diversity. It is powered by a global community of voice contributors, who want to help build inclusive voice technologies in their own languages, and allow for local value creation.

This is the second award for Mozilla from Fast Company in as many years, and the second time Common Voice has been recognized, after it was honored as a finalist in the experimental category in the Innovation by Design Awards in 2018. To keep up with future developments in Common Voice, follow the project on our Discourse forum.

(Photo Credit: Nick Leoni Photography)

The post Fast Company Recognizes Katharina Borchert as one of the Most Creative Business People appeared first on The Mozilla Blog.

by Mozilla at Tue Aug 04 2020 11:59:29 GMT+0000 (Coordinated Universal Time)


Raspberry Pi Blog

DSLR motion detection with Raspberry Pi and OpenCV

One of our favourite makers, Pi & Chips (AKA David Pride), wanted to see if they could trigger a DSLR camera to take pictures by using motion detection with OpenCV on Raspberry Pi.

You could certainly do this with a Raspberry Pi High Quality Camera, but David wanted to try with his swanky new Lumix camera. As well as a Raspberry Pi and whichever camera you’re using, you’ll also need a remote control. David sourced a cheap one from Amazon, since he knew full well he was going to be… breaking it a bit.

Breaking the remote a bit

When it came to the “breaking” part, David explains: “I was hoping to be able to just re-solder some connectors to the button but it was a dual function button depending on depth of press. I therefore got a set of probes out and traced which pins on the chip were responsible for the actual shutter release and then *carefully* managed to add two fine wires.”

Further breaking

Next, David added Dupont cables to the ends of the wires to allow access to the breadboard, holding the cables in place with a blob of hot glue. Then a very simple circuit using an NPN transistor to switch via GPIO gave remote control of the camera from Python.
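
The exact pin and timing aren’t given in the write-up, but driving such a transistor from Python can be as simple as pulsing one GPIO pin. A minimal sketch, assuming BCM pin 18 and a 0.2-second press:

import time
import RPi.GPIO as GPIO

SHUTTER_PIN = 18            # assumption: whichever GPIO pin drives the transistor base

GPIO.setmode(GPIO.BCM)
GPIO.setup(SHUTTER_PIN, GPIO.OUT, initial=GPIO.LOW)

def take_picture(pulse=0.2):
    """Hold the transistor on briefly, mimicking a press of the shutter button."""
    GPIO.output(SHUTTER_PIN, GPIO.HIGH)
    time.sleep(pulse)
    GPIO.output(SHUTTER_PIN, GPIO.LOW)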

Raspberry Pi on the right, working together with the remote control’s innards on the left

David then added OpenCV to the mix, using this tutorial on PyImageSearch. He took the basic motion detection script and added a tiny hack to trigger the GPIO when motion was detected.
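
David’s actual script follows the PyImageSearch tutorial; the hedged sketch below only illustrates the shape of that hack, calling take_picture() from the snippet above whenever a simple frame difference exceeds a threshold (the threshold values are placeholders):

import cv2

def watch_for_motion(min_changed_pixels=5000):
    cap = cv2.VideoCapture(0)
    previous = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if previous is None:
            previous = gray
            continue
        delta = cv2.absdiff(previous, gray)
        thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(thresh) > min_changed_pixels:
            take_picture()          # fire the camera via the GPIO circuit above
        previous = gray
    cap.release()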

He needed to add a delay to the start of the script so he could position stuff, or himself, in front of the camera with time to spare. Got to think of those angles.

David concludes: “The camera was set to fully manual and to a really nice fast shutter speed. There is almost no delay at all between motion being detected and the Lumix actually taking pictures, I was really surprised how instantaneous it was.”

The whole setup mounted on a tripod ready to play

Here are some of the visuals captured by this Raspberry Pi-powered project…

Take a look at some more of David’s projects over at Pi & Chips.

The post DSLR motion detection with Raspberry Pi and OpenCV appeared first on Raspberry Pi.

by Raspberry Pi Blog at Tue Aug 04 2020 12:48:57 GMT+0000 (Coordinated Universal Time)

Monday, August 3, 2020


Raspberry Pi Blog

Raspberry Pi won’t let your watched pot boil

One of our favourite YouTubers, Harrison McIntyre, decided to make the aphorism “a watched pot never boils” into reality. They modified a tabletop burner with a Raspberry Pi so that it will turn itself off if anyone looks at it.

In this project, the Raspberry Pi runs facial detection using a USB camera. If the Raspberry Pi finds a face, it deactivates the burner, and vice versa.

There’s a snag, in that the burner runs off 120 V AC and the Raspberry Pi runs off 5 V DC, so you can’t just power the burner through the Raspberry Pi. Harrison got round this problem using a relay switch, and beautifully explains how a relay manages to turn a circuit off and on without directly interfacing with the circuit at the two minute mark of this video.
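
Harrison’s own code is linked in the kit list below; as a rough sketch of the logic, face detection with one of OpenCV’s bundled Haar cascades can drive a relay pin directly. The BCM pin number and the “HIGH keeps the burner powered” convention are assumptions:

import cv2
import RPi.GPIO as GPIO

RELAY_PIN = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.HIGH)      # assume HIGH keeps the burner powered

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    # Someone is watching: cut power to the burner; nobody watching: restore it
    GPIO.output(RELAY_PIN, GPIO.LOW if len(faces) > 0 else GPIO.HIGH)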

The Raspberry Pi working through the switchable plug with the burner

Harrison sourced a switchable plug bar which uses a relay to turn its own switches on and off. Plug the burner and the Raspberry Pi into that and, hey presto, you’ve got them working together via a relay.

The six camera setup

Things get jazzy at the four minute 30 second mark. At this point, Harrison decides to upgrade his single camera situation, and rig up six USB cameras to make sure that no matter where you are when you look at the burner, the Raspberry Pi will always see your face and switch it off.

Inside the switchable plug

Harrison’s multiple-camera setup proved a little much for the Raspberry Pi 3B he had to hand for this project, so he goes on to explain how he got a bit of extra processing power using a different desktop and an Arduino. He recommends going for a Raspberry Pi 4 if you want to try this at home.

Kit list:

  • Raspberry Pi 4
  • Tabletop burner
  • USB cameras or rotating camera
  • Switchable plug bar
  • All of this software

It’s not just a saying anymore, thanks to Harrison

And the last great thing about this project is that you could invert the process to create a safety mechanism, meaning you wouldn’t be able to wander away from your cooking and leave things to burn.

We also endorse Harrison’s advice to try this with an electric burner and most definitely not a gas one; those things like to go boom if you don’t play with them properly.

The post Raspberry Pi won’t let your watched pot boil appeared first on Raspberry Pi.

by Raspberry Pi Blog at Mon Aug 03 2020 11:51:06 GMT+0000 (Coordinated Universal Time)

Saturday, August 1, 2020


Raspberry Pi Blog

Design game graphics with Digital Making at Home

Join us for Digital Making at Home: this week, young people can explore the graphics side of video game design! Through Digital Making at Home, we invite kids all over the world to code along with us and our new videos every week.

So get ready to design video game graphics with us:

Check out this week’s code-along projects!

And tune in on Wednesday 2pm BST / 9am EDT / 7.30pm IST at rpf.io/home to code along with our live stream session to make a Space Invaders–style shooter game in Scratch!

The post Design game graphics with Digital Making at Home appeared first on Raspberry Pi.

by Raspberry Pi Blog at Sat Aug 01 2020 10:10:00 GMT+0000 (Coordinated Universal Time)

Friday, July 31, 2020


Raspberry Pi Blog

International Space Station Tracker | The MagPi 96

Fancy tracking the ISS’s trajectory? All you need is a Raspberry Pi, an e-paper display, an enclosure, and a little Python code. Nicola King looks to the skies

The e-paper display mid-refresh. It takes about three seconds to refresh, but it’s fast enough for this kind of project

Standing on his balcony one sunny evening, the perfect conditions enabled California-based astronomy enthusiast Sridhar Rajagopal to spot the International Space Station speeding by, and the seeds of an idea were duly sown. Having worked on several projects using tri-colour e-paper (aka e-ink) displays, which he likes for their “aesthetics and low-to-no-power consumption”, he thought that developing a way of tracking the ISS using such a display would be a perfect project to undertake.

“After a bit of searching, I was able to find an open API to get the ISS location at any given point in time,” explains Sridhar. “I also knew I wouldn’t have to worry about the data changing several times per second or even per minute. Even though the ISS is wicked fast (16 orbits in a day!), this would still be well within the refresh capabilities of the e-paper display.”

The ISS location data is obtained using the Open Notify API – visit magpi.cc/isslocation to see its current position

Station location

His ISS Tracker works by obtaining the ISS location from the Open Notify API every 30 seconds. It appends this data point to a list, so older data is available. “I don’t currently log the data to file, but it would be very easy to add this functionality,” says Sridhar. “Once I have appended the data to the list, I call the drawISS method of my Display class with the positions array, to render the world map and ISS trajectory and current location. The world map gets rendered to one PIL image, and the ISS location and trajectory get rendered to another PIL image.”
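
Sridhar’s full code is linked below; the polling step he describes boils down to something like this sketch, which uses the Open Notify iss-now.json endpoint (the 200-point history length is an assumption):

import time
import requests
from collections import deque

positions = deque(maxlen=200)       # keep a trail of recent fixes for the trajectory

def fetch_iss_position():
    data = requests.get("http://api.open-notify.org/iss-now.json", timeout=10).json()
    lat = float(data["iss_position"]["latitude"])
    lon = float(data["iss_position"]["longitude"])
    return lat, lon

while True:
    positions.append(fetch_iss_position())
    # positions[-1] is the current location; older entries trace the trajectory
    time.sleep(30)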

The project code is written in Python and can be found on Sridhar’s GitHub page: magpi.cc/isstrackercode

Each latitude/longitude position is mapped to the corresponding XY co-ordinate. The last position in the array (the latest position) gets rendered as the ISS icon to show its current position. “Every 30th data point gets rendered as a rectangle, and every other data point gets rendered as a tiny circle,” adds Sridhar.
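
For an equirectangular world map, that mapping is a pair of linear rescalings. A hedged sketch — the display resolution used here is an assumption, not Sridhar’s exact panel:

EPD_WIDTH, EPD_HEIGHT = 250, 122     # assumed e-paper resolution

def latlon_to_xy(lat, lon, width=EPD_WIDTH, height=EPD_HEIGHT):
    x = int((lon + 180.0) / 360.0 * width)      # longitude -180..180 maps to 0..width
    y = int((90.0 - lat) / 180.0 * height)      # latitude +90 sits at the top of the map
    return x, y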

From there, the images are then simply passed into the e-paper library’s display method; one image is rendered in black, and the other image in red.

Track… star

Little wonder that the response received from friends, family, and the wider maker community has been extremely positive, as Sridhar shares: “The first feedback was from my non-techie wife who love-love-loved the idea of displaying the ISS location and trajectory on the e-paper display. She gave valuable input on the aesthetics of the data visualisation.”

Software engineer turned hardware-hacking enthusiast and entrepreneur, Sridhar Rajagopal is the founder of Upbeat Labs and creator of ProtoStax – a maker-friendly stackable, modular, and extensible enclosure system.

In addition, he tells us that other makers have contributed suggestions for improvements. “JP, a Hackster community user […] added information to make the Python code a service and have it launch on bootup. I had him contribute his changes to my GitHub repository – I was thrilled about the community involvement!”

Housed in a versatile, transparent ProtoStax enclosure designed by Sridhar, the end result is an elegant way of showing the current position and trajectory of the ISS as it hurtles around the Earth at 7.6 km/s. Why not have a go at making your own display so you know when to look out for the space station whizzing across the night sky? It really is an awesome sight.

Get The MagPi magazine issue 96 — out today

The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download it directly in PDF format from the MagPi magazine website.

Subscribe to the MagPi for 12 months to get a free Adafruit Circuit Playground, or choose from one of our other subscription offers, including this amazing limited-time offer of three issues and a book for only £10!

The post International Space Station Tracker | The MagPi 96 appeared first on Raspberry Pi.

by Raspberry Pi Blog at Fri Jul 31 2020 10:27:36 GMT+0000 (Coordinated Universal Time)

Thursday, July 30, 2020


Raspberry Pi Blog

Amazing science from the winners of Astro Pi Mission Space Lab 2019–20

The team at Raspberry Pi and our partner ESA Education are pleased to announce the winning and highly commended Mission Space Lab teams of the 2019–20 European Astro Pi Challenge!

Mission Space Lab sees teams of young people across Europe design, create, and deploy experiments running on Astro Pi computers aboard the International Space Station. Their final task: analysing the experiments’ results and sending us scientific reports highlighting their methods, results, and conclusions.

One of the Astro Pi computers aboard the International Space Station

The science the teams performed was truly impressive, and the reports they sent us were of outstanding quality. A special round of applause to the teams for making the effort to coordinate writing their reports while socially distanced!

The Astro Pi jury has now selected the ten winning teams, as well as eight highly commended teams:

And our winners are…

Vidhya’s code from the UK aimed to answer the question of how a compass works on the ISS, using the Astro Pi computer’s magnetometer and data from the World Magnetic Model (WMM).

Unknown from Externato Cooperativo da Benedita, Portugal, aptly investigated whether influenza is transmissible on a spacecraft such as the ISS, using the Astro Pi hardware alongside a deep literature review.

Space Wombats from Institut d’Altafulla, Spain, used normalized difference vegetation index (NDVI) analysis to identify burn scars from forest fires. They even managed to get results over Chernobyl!

Liberté from Catmose College, UK, set out to prove the Coriolis Effect by using Sobel filtering methods to identify the movement and direction of clouds.

Pardubice Pi from SPŠE a VOŠ Pardubice, Czech Republic, found areas of enormous vegetation loss by performing NDVI analysis on images taken from the Astro Pi and comparing this with historic images of the location.

NDVI conversion image by Pardubice Pi team

Reforesting Entrepreneurs from Canterbury School of Gran Canaria, Spain, want to help solve the climate crisis by using NDVI analysis to identify locations where reforestation is possible.

1G5-Boys from Lycée Raynouard, France, innovatively conducted spectral analysis using Fast Fourier Transforms to study low-frequency vibrations of the ISS.

Cloud4 from Escola Secundária de Maria, Portugal, masterfully used a simplified static model and Fourier Analysis to detect atmospheric gravity waves (AGWs).

Cloud Wizzards from Primary School no. 48, Poland, scanned the sky to determine what percentage of the seas and oceans are covered by clouds.

Aguere Team 1 from IES Marina Cebrián, Spain, probed the behaviour of the magnetic field, acceleration, and temperature on the ISS by investigating disturbances, variations with latitude, and temporal changes.

Highly commended teams

Creative Coders, from the UK, decided to see how much of the Earth’s water is stored in clouds by analysing the pixels of each image of Earth their experiment collected.

Astro Jaslo from I Liceum Ogólnokształcące króla Stanisława Leszczyńskiego w Jaśle, Poland, used Riemann geometry to determine the angle between light from the sun that is perpendicular to the Astro Pi camera, and the line segment from the ISS to Earth’s centre.

Jesto from S.M.S Arduino I.C.Ivrea1, Italy, used a multitude of the Astro Pi computers’ capabilities to study NDVI, magnetic fields, and aerosol mapping.

BLOOMERS from Tudor Vianu National Highschool of Computer Science, Romania, investigated how algae blooms are affected by eutrophication in polluted areas.

AstroLorenzini from Liceo Statale C. Lorenzini, Italy used Kepler’s third law to determine the eccentricity, apogee, perigee, and mean tangential velocity of the ISS.

Photo of Italy, Calabria and Sicilia (notice volcano Etna on the top right-hand corner) captured by the AstroLorenzini team

EasyPeasyCoding Verdala FutureAstronauts from Verdala International School & EasyPeasyCoding, Malta, utilised machine learning to differentiate between cloud types.

BHTeamEL from Branksome Hall, Canada, processed images using Y of YCbCr colour mode data to investigate the relationship between cloud type and luminescence.

Space Kludgers from Technology Club of Thrace, STETH, Greece, identified how atmospheric emissions correlate to population density, as well as using NDVI, ECCAD, and SEDAC to analyse the correlation of vegetation health and abundance with anthropogenic emissions.

The teams get a Q&A with astronaut Luca Parmitano

The prize for the winners and highly commended teams is the chance to pose their questions to ESA astronaut Luca Parmitano! The teams have been asked to record a question on video, which Luca will answer during a live stream on 3 September.

ESA astronaut Luca Parmitano aboard the International Space Station

This Q&A event for the finalists will conclude this year’s European Astro Pi Challenge. Everyone on the Raspberry Pi and ESA Education teams congratulates this year’s participants on all their efforts.

It’s been a phenomenal year for the Astro Pi challenge: teams performed some great science, and across Mission Space Lab and Mission Zero, an astronomical 16,998 young people took part, from all ESA member states as well as Slovenia, Canada, and Malta.

Congratulations to everyone who took part!

Get excited for your next challenge!

This year’s European Astro Pi Challenge is almost over, and the next edition is just around the corner!

Compilation of photographs of Earth taken by an Astro Pi computer

So we invite school teachers, educators, students, and all young people who love coding and space science to join us from September onwards.

Follow our updates on astro-pi.org and social media to make sure you don’t miss any announcements. We will see you for next year’s European Astro Pi Challenge!

The post Amazing science from the winners of Astro Pi Mission Space Lab 2019–20 appeared first on Raspberry Pi.

by Raspberry Pi Blog at Thu Jul 30 2020 11:57:39 GMT+0000 (Coordinated Universal Time)

Sunday, July 26, 2020


Catherine Leung

Summer Fully Online

I taught this summer term fully online. This is different from the winter semester, where I started with a class in person and then went online towards the end of the term. I thought it might be a good idea to write about this experience, as it is a bit different from the first case.

Preparation and Tools

In the winter semester, when we went online, we had a week to convert our class. It was pretty fast; however, as we already knew our students and had been communicating with them during the term, the way students would find their course materials (notes, assignments, etc.) was generally known to them already. For me, the only new parts were online delivery and online testing. Once I figured out which platform to use, I was pretty much ready to go.

The summer term, however, posed a different challenge. I would not have had 9 weeks in person with the students before going online. As such, one of the challenges for me was one of organization. I will start by stating that organization is not my strong suit. Anyone who has ever met me knows that my general method of organization is “piles”… throw things into various piles, look for things based on which pile you think it was thrown into… this has never been truly effective but /shrug… This term though, because I was online, I had to be really organized. I took the course led by Seneca’s Teaching and Learning department about teaching remotely. It was a pretty good course, and there were a few things I learned that I thought were really helpful.

  1. Create a welcome video. Because your students are learning remotely, make a video to show them how to get started. Mine was posted as an unlisted video on YouTube (though knowing what I know now, I’ll have to rethink this for next term… see below). Here is what I posted:
    https://www.youtube.com/watch?v=_KeNE-773eY
  2. Being fully online, I wanted a way to make it clear what materials were being covered and when. I borrowed this technique from the online teaching course as I thought it was a very tidy way to show the things that needed attention each week. Instead of organizing materials by category (readings, assignments, labs, etc.), I set up Blackboard chronologically and organized the tasks by week. In each week, I posted all the materials that were relevant for that week: all the readings, a blurb about lectures, labs, and assignments. The materials were still hosted in my GitHub repo, and students were welcome to go to that repo directly. In the repository, the material is organized categorically as opposed to chronologically, so students could access the material however they wished. Having the material organized in this way is nice… it makes it clear what materials were being covered and what assessments were required.

Access

In the winter term, when we went online, all my students were still local. That is, they lived somewhere nearby… all of them were still in Canada and in the same time zone. During the summer term, I had two students who were located far away, in different countries… in different time zones. Ensuring that they could still access the material was crucial.

We take it so much for granted that students will have access to what they need, even when they aren’t located locally, that it is always shocking when they don’t. For example, we tend to think of YouTube as always available and easy to access for everyone. However, during a teachers’ group seminar, I learned that this is not the case… YouTube (and I think Google-based services in general) is not accessible in China, so if you only make your videos available through YouTube… this is actually a problem.

Another thing that I thought was pretty widely available was GitHub. As I am teaching software development, I think it’s a good idea for our students to be exposed to GitHub and to learn how to use it effectively. I always set up a GitHub organization for my students and provide private repositories for them via the GitHub Education program. I make my own materials available to my students via a GitHub repo (course documents in the wiki, code samples in the repo). I never had a problem with this… until one of my students went home to be with their family in Iran. Because my student was in Iran, they lost access to their private repository. Workarounds ensued (submitting via other tools, making my course repository public, etc.).

For another student, it wasn’t access but rather an 11-hour-or-so time difference. Even though I held synchronous classes, I recorded them so that they could be watched afterwards. I think this is pretty important, as it is really not reasonable to ask a student to attend class at 3 am. Given the schedule for my class, the student would attend when it was reasonable and watch recordings when not. We really need to make this available to them.

TODO

One of the hardest things to do is to gauge whether or not my students are following the material properly. In class, you can read from the expressions on students’ faces whether or not they understand the material. Online, this is simply not possible (most do not use their webcams). One of the things that I still need to work on is being able to gauge whether or not the material is being absorbed. I’m not too sure how to do this yet… I’m thinking intermittent surveys during class could be useful. I’ll need to think through how best to do this though.

by Catherine Leung at Sun Jul 26 2020 17:28:59 GMT+0000 (Coordinated Universal Time)

Thursday, July 23, 2020


Mozilla

Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research

Today, Mozilla is pleased to announce that we’re joining the Ford Foundation, the Sloan Foundation, and the Open Society Foundations to launch a request for proposals (RFP) for research on open source digital infrastructure. To kick off this RFP, we’re joining with our philanthropic partners to host a webinar today at 9:30 AM Pacific. The Mozilla Open Source Support Program (MOSS) is contributing $25,000 to this effort.

Nearly everything in our modern society, from hospitals and banks to universities and social media platforms, runs on “digital infrastructure” – a foundation of open source code that is designed to solve common challenges. The benefits of digital infrastructure are numerous: it can reduce the cost of setting up new businesses, support data-driven discovery across research disciplines, enable complex technologies such as smartphones to talk to each other, and allow everyone to have access to important innovations like encryption that would otherwise be too expensive.

In joining with these partners for this funding effort, Mozilla hopes to propel further investigation into the sustainability of open source digital infrastructure. Selected researchers will help determine the role companies and other private institutions should play in maintaining a stable ecosystem of open source technology, the policy and regulatory considerations for the long-term sustainability of digital infrastructure, and much more. These aims align with Mozilla’s pledge for a healthy internet, and we’re confident that these projects will go a long way towards deepening a crucial collective understanding of the industrial maintenance of digital infrastructure.

We’re pleased to invite interested researchers to apply to the RFP, using the application found here. The application opened on July 20, 2020, and will close on September 4, 2020. Finalists will be notified in October, at which point full proposals will be requested. Final proposals will be selected in November.

More information about the RFP is available here.

The post Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research appeared first on The Mozilla Blog.

by Mozilla at Thu Jul 23 2020 15:58:30 GMT+0000 (Coordinated Universal Time)

Tuesday, July 21, 2020


Mozilla

A look at password security, Part III: More secure login protocols

In part II, we looked at the problem of Web authentication and covered the twin problems of phishing and password database compromise. In this post, I’ll be covering some of the technologies that have been developed to address these issues.

This is mostly a story of failure, though with a sort of hopeful note at the end. The ironic thing here is that we’ve known for decades how to build authentication technologies which are much more secure than the kind of passwords we use on the Web. In fact, we use one of these technologies — public key authentication via digital certificates — to authenticate the server side of every HTTPS transaction before you send your password over. HTTPS supports certificate-based client authentication as well, and while it’s commonly used in other settings, such as SSH, it’s rarely used on the Web. Even if we restrict ourselves to passwords, we have long had technologies for password authentication which completely resist phishing, but they are not integrated into the Web technology stack at all. The problem, unfortunately, is less about cryptography than about deployability, as we’ll see below.

Two Factor Authentication and One-Time Passwords

The most widely deployed technology for improving password security goes by the name one-time passwords (OTP) or (more recently) two-factor authentication (2FA). OTP actually goes back to well before the widespread use of encrypted communications or even the Web to the days when people would log in to servers in the clear using Telnet. It was of course well known that Telnet was insecure and that anyone who shared the network with you could just sniff your password off the wire1 and then login with it [Technical note: this is called a replay attack.] One partial fix for this attack was to supplement the user password with another secret which wasn’t static but rather changed every time you logged in (hence a “one-time” password).

OTP systems came in a variety of forms but the most common was a token about the size of a car key fob but with an LCD display, like this:

The token would produce a new pseudorandom numeric code every 30 seconds or so and when you went to log in to the server you would provide both your password and the current code. That way, even if the attacker got the code they still couldn’t log in as you for more than a brief period2 unless they also stole your token. If all of this looks familiar, it’s because this is more or less the same as modern OTP systems such as Google Authenticator, except that instead of a hardware token, these systems tend to use an app on your phone and have you log into some Web form rather than over Telnet. The reason this is called “two-factor authentication” is that authenticating requires both a value you know (the password) and something you have (the device). Some other systems use a code that is sent over SMS but the basic idea is the same.
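To make the idea of a time-based code concrete, here is a minimal Python sketch of an RFC 6238-style code generator (my own illustration, not taken from the post; the secret and parameters are made up, and real apps such as Google Authenticator add details like base32-encoded secrets):

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Which 30-second window we are in right now
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and the token/app derive the same code from the shared secret.
print(totp(b"shared-secret-provisioned-at-setup"))

Because the code depends on the current time window, a stolen code stops working once the window rolls over, which is exactly the property described above.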

OTP systems don’t provide perfect security, but they do significantly improve the security of a password-only system in two respects:

  1. They guarantee a strong, non-reused secret. Even if you reuse passwords and your password on site A is compromised, the attacker still won’t have the right code for site B.3
  2. They mitigate the effect of phishing. If you are successfully phished the attacker will get the current code for the site and can log in as you, but they won’t be able to log in in the future because knowing the current code doesn’t let you predict a future code. This isn’t great but it’s better than nothing.

The nice thing about a 2FA system is that it’s comparatively easy to deploy: it’s a phone app you download plus another code that the site prompts you for. As a result, phone-based 2FA systems are very popular (and if that’s all you have, I advise you to use it, but really you want WebAuthn, which I’ll be describing in my next post).

Password Authenticated Key Agreement

One of the nice properties of 2FA systems is that they do not require modifying the client at all, which is obviously convenient for deployment. That way you don’t care if users are running Firefox or Safari or Chrome, you just tell them to get the second factor app and you’re good to go. However, if you can modify the client you can protect your password rather than just limiting the impact of having it stolen. The technology to do this is called a Password Authenticated Key Agreement (PAKE) protocol.

The way a PAKE would work on the Web is that it would be integrated into the TLS connection that already secures your data on its way to the Web server. On the client side when you enter your password the browser feeds it into TLS and on the other side, the server feeds in a verifier (effectively a password hash). If the password matches the verifier, then the connection succeeds, otherwise it fails. PAKEs aren’t easy to design — the tricky part is ensuring that the attacker has to reconnect to the server for each guess at the password — but it’s a reasonably well understood problem at this point and there are several PAKEs which can be integrated with TLS.

What a PAKE gets you is security against phishing: even if you connect to the wrong server, it doesn’t learn anything about your password that it doesn’t already know because you just get a cryptographic failure. PAKEs don’t help against password file compromise because the server still has to store the verifier, so the attacker can perform a password cracking attack on the verifier just as they would on the password hash. But phishing is a big deal, so why doesn’t everyone use PAKEs? The answer here seems to be surprisingly mundane but also critically important: user interface.

The way that most Web sites authenticate is by showing you a Web page with a field where you can enter your password, as shown below:

[Screenshot: a typical website sign-in form with username and password fields]

When you click the “Sign In” button, your password gets sent to the server which checks it against the hash as described in part I. The browser doesn’t have to do anything special here (though often the password field will be specially labelled so that the browser can automatically mask out your password when you type); it just sends the contents of the field to the server.

In order to use a PAKE, you would need to replace this with a mechanism where you gave the browser your password directly. Browsers actually have something for this, dating back to the earliest days of the Web. On Firefox it looks like this:

[Screenshot: Firefox’s built-in login prompt (an HTTP authentication dialog)]

Hideous, right? And I haven’t even mentioned the part where it’s a modal dialog that takes over your experience. In principle, of course, this might be fixable, but it would take a lot of work and would still leave the site with a lot less control over their login experience than they have now; understandably they’re not that excited about that. Additionally, while a PAKE is secure from phishing if you use it, it’s not secure if you don’t, and nothing stops the phishing site from skipping the PAKE step and just giving you an ordinary login page, hoping you’ll type in your password as usual.

None of this is to say that PAKEs aren’t cool tech, and they make a lot of sense in systems that have less flexible authentication experiences; for instance, your email client probably already requires you to enter your authentication credentials into a dialog box, and so that could use a PAKE. They’re also useful for things like device pairing or account access where you want to start with a small secret and bootstrap into a secure connection. Apple is known to use SRP, a particular PAKE, for exactly this reason. But because the Web already offers a flexible experience, it’s hard to ask sites to take a step backwards and PAKEs have never really taken off for the Web.

Public Key Authentication

From a security perspective, the strongest thing would be to have the user authenticate with a public private key pair, just like the Web server does. As I said above, this is a feature of TLS that browsers actually have supported (sort of) for a really long time but the user experience is even more appalling than for builtin passwords.4 In principle, some of these technical issues could have been fixed, but even if the interface had been better, many sites would probably still have wanted to control the experience themselves. In any case, public key authentication saw very little usage.

It’s worth mentioning that public key authentication actually is reasonably common in dedicated applications, especially in software development settings. For instance, the popular SSH remote login tool (replacing the unencrypted Telnet) is commonly used with public key authentication. In the consumer setting, Apple AirDrop uses iCloud-issued certificates with TLS to authenticate your contacts.

Up Next: FIDO/WebAuthn

This was the situation for about 20 years: in theory public key authentication was great, but in practice it was nearly unusable on the Web. Everyone used passwords, some with 2FA and some without, and nobody was really happy. There had been a few attempts to try to fix things but nothing really stuck. However, in the past few years a new technology called WebAuthn has been developed. At heart, WebAuthn is just public key authentication but it’s integrated into the Web in a novel way which seems to be a lot more deployable than what has come before. I’ll be covering WebAuthn in the next post.


  1. And by “wire” I mean a literal wire, though such sniffing attacks are prevalent in wireless networks such as those protected by WPA2 
  2. Note that to really make this work well, you also need to require a new code in order to change your password, otherwise the attacker can change your password for you in that window. 
  3. Interestingly, OTP systems are still subject to server-side compromise attacks. The way that most of the common systems work is to have a per-user secret which is then used to generate a series of codes, e.g., truncated HMAC(Secret, time) (see RFC6238). If an attacker compromises the secret, then they can generate the codes themselves. One might ask whether it’s possible to design a system which didn’t store a secret on the server but rather some public verifier (e.g., a public key) but this does not appear to be secure if you also want to have short (e.g., six digits) codes. The reason is that if the information that is used to verify is public, the attacker can just iterate through every possible 6 digit code and try to verify it themselves. This is easily possible during the 30 second or so lifetime of the codes. Thanks to Dan Boneh for this insight. 
  4. The details are kind of complicated here, but here are just some of the problems: (1) TLS client authentication is mostly tied to certificates, and the process of getting a certificate into the browser was just terrible; (2) the certificate selection interface is clunky; (3) until TLS 1.3, the certificate was actually sent in the clear unless you did TLS renegotiation, which had its own problems, particularly around privacy.

Update: 2020-07-21: Fixed up a sentence.

The post A look at password security, Part III: More secure login protocols appeared first on The Mozilla Blog.

by Mozilla at Tue Jul 21 2020 00:06:49 GMT+0000 (Coordinated Universal Time)

Monday, July 20, 2020


David Humphrey

On Boring Paths

I was chatting with a student today who couldn't figure out why his scripts wouldn't run.  He was doing everything correctly, but still the commands failed with a bizarre error.  I asked to see the output.  Here's an approximation:

PS C:\Users\student\OneDrive\Computer Programming\Assignments & Tests\assignment4> npm run server
'Tests\assignment4\node_modules\.bin\' is not recognized as an internal or external command,
operable program or batch file.
internal/modules/cjs/loader.js:969
  throw err;
  ^

The error here stems from the pathname, which contains an &.  In many shells, including PowerShell, this will often get interpreted as part of the shell command vs. the filename.  You can see the error message above referring to the portion of the path after the &, ignoring everything before it.
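To see why the & bites, here is a small Python sketch (mine, not from the post, and aimed at POSIX shells rather than PowerShell) showing the raw path versus a properly quoted one:

import shlex

# Hypothetical path containing an ampersand
path = "Computer Programming/Assignments & Tests/assignment4"

# Handing the raw path to a shell lets '&' act as a control operator,
# which splits the command apart, much like the npm error above.
print(f"ls {path}")

# shlex.quote() wraps the path so a POSIX shell treats it as a single word.
print(f"ls {shlex.quote(path)}")   # ls 'Computer Programming/Assignments & Tests/assignment4'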

It's not an obvious problem until, and unless, you know about using & in shell commands.  I also think that the proliferation of non-filesystem naming in tools like Google Docs and the like contributes to a sense that a pathname can be any text you want.  After all, typing an & in the title of a Word Doc isn't a problem, so why should it be in a file path?

Paths, filenames, URLs, and other technical naming formats are not the place to get creative with your use of the keyboard.  You want to keep things boring:  lowercase (looking at you, Apple), no spaces (looking at you, npm), no special characters.  Keep it short.  Keep it simple.  Keep it boring.

by David Humphrey at Mon Jul 20 2020 15:11:30 GMT+0000 (Coordinated Universal Time)

Wednesday, July 15, 2020


Mozilla

Mozilla Puts Its Trusted Stamp on VPN

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows, Android and iOS devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

See for yourself how the Mozilla VPN works:

 

The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is based on modern and lean technology: the WireGuard protocol’s 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

In a market crowded by companies making promises about privacy and security, it can be hard to know who to trust. Mozilla has a reputation for building products that help you keep your information safe. We follow our easy to read, no-nonsense Data Privacy Principles which allow us to focus only on the information we need to provide a service. We don’t keep user data logs.

We don’t partner with third-party analytics platforms who want to build a profile of what you do online. And since the makers of this VPN are backed by a mission-driven company you can trust that the dollars you spend for this product will not only ensure you have a top-notch VPN, but also are making the internet better for everyone.

Simple and easy-to-use switch

Last year, we beta tested our VPN service which provided encryption and device-level protection of your connection and information on the Web. Many users shared their thoughts on why they needed this service.

Some of the top reasons users cited for using a VPN:

  • Security for all your devices – Users are flocking to VPNs for added protection online. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.
  • Added protection for your private information – Over 50 percent of VPN users in the US and UK said that seeking protection when using public wi-fi was a top reason for choosing a VPN service.
  • Browse more anonymously – Users care immensely about being anonymous when they choose to. A VPN is a key component as it encrypts all your traffic and protects your IP address and location.
  • Communicate more securely – Using a VPN can give an added layer of protection, ensuring every conversation you have is encrypted over the network.

In a world where unpredictability has become the “new normal,” we know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business.

Check out the Mozilla VPN and download it from our website, the Google Play store, or the Apple App Store.

*Updated July 27, 2020 to reflect the availability of Mozilla VPN on iOS devices

The post Mozilla Puts Its Trusted Stamp on VPN appeared first on The Mozilla Blog.

by Mozilla at Wed Jul 15 2020 14:16:59 GMT+0000 (Coordinated Universal Time)

Monday, July 13, 2020


Mozilla

A look at password security, Part II: Web Sites

In part I, we took a look at the design of password authentication systems for old-school multiuser systems. While timesharing is mostly gone, most of us continue to use multiuser systems; we just call them Web sites. In this post, I’ll be covering some of the problems of Web authentication using passwords.

As I discussed previously, the strength of passwords depends to a great extent on how fast the attacker can try candidate passwords. The nature of a Web application inherently limits the velocity at which you can try passwords quite a bit. Even ignoring limits on the rate which you can transmit stuff over the network, real systems — at least well managed ones — have all kinds of monitoring software which is designed to detect large numbers of login attempts, so just trying millions of candidate passwords is not very effective. This doesn’t mean that remote attacks aren’t possible: you can of course try to log in with some of the obvious passwords and hope you get lucky, and if you have a good idea of a candidate password, you can try that (see below), but this kind of attack is inherently somewhat limited.

Remote compromise and password cracking

Of course, this kind of limitation in the number of login attempts you could make also applied to the old multiuser systems and the way you attack Web sites is the same: get a copy of the password file and remotely crack it.

The way this plays out is that somehow the attacker exploits a vulnerability in the server’s system to compromise the password database.1 They can then crack it offline and try to recover people’s passwords. Once they’ve done that, they can then use those passwords to log into the site themselves. If a site’s password database is stolen, their strongest defense is to reset everyone’s password, which is obviously really inconvenient, harms the site’s brand, and runs the risk of user attrition, and so doesn’t always happen.

To make matters worse, many users use the same password on multiple sites, so once you have broken someone’s password on one site, you can then try to login as them on other sites with the same password, even if the user’s password was reset on the site which was originally compromised. Even though this is an online attack, it’s still very effective, because password reuse is so common (this is one reason why it’s a bad idea to reuse passwords).

Password database disclosure is unfortunately quite a common occurrence, so much so that there are services such as Firefox Monitor and Have I been pwned? devoted to letting users know when some service they have an account on has been compromised.

Assuming a site is already following best practices (long passwords, slow password hashing algorithms, salting, etc.) then the next step is to either make it harder to steal the password hash or to make the password hash less useful. A good example here is the Facebook system described in this talk by Alec Muffett (famous for, among other things, the Crack password cracker). The system uses multiple layers of hashing, one of which is a keyed hash [technically, HMAC-SHA256] performed on a separate, hardened, machine. Even if you compromise the password hash database, it’s not useful without the key, which means you would also have to compromise that machine as well.2
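As a rough sketch of that “keyed hash on a separate, hardened machine” idea (my own simplification in Python, not Facebook’s actual pipeline, which layers several hashes as footnote 2 describes), the stored value is an HMAC computed with a key that never touches the web servers:

import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)   # in the real design this lives only on the hardened HMAC machine

def hmac_service(data: bytes) -> bytes:
    # Stand-in for a network call to the separate, hardened box
    return hmac.new(SECRET_KEY, data, hashlib.sha256).digest()

def store_entry(password: str, salt: bytes) -> bytes:
    local_hash = hashlib.sha256(salt + password.encode()).digest()
    return hmac_service(local_hash)   # the password database stores only the keyed result

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    local_hash = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(hmac_service(local_hash), stored)

A stolen database of these values is useless for offline cracking unless the attacker also compromises the machine holding SECRET_KEY.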

Another defense is to use one-time password systems (often also called two-factor authentication systems). I’ll cover those in a future post.

Phishing

Leaked passwords aren’t the only threat to password authentication on Web sites. The other big issue is what’s called phishing. In the basic phishing attack, the attacker sends you an e-mail inviting you to log into your account. Often this will be phrased in some scary way like telling you your account will be deleted if you don’t log in immediately. The e-mail will helpfully contain a link to use to log in, but of course this link will go not to the real site but to the attacker’s site, which will usually look just like the real site and may even have a similar domain name (e.g., mozi11a.com instead of mozilla.com). When the user clicks on the link and logs in, the attacker captures their username and password and can then log into the real site. Note that having users use good passwords totally doesn’t help here because the user gives the site their whole password.

Preventing phishing has proven to be a really stubborn challenge because, well, people are not as suspicious as they should be and it’s actually fairly hard on casual examination to determine whether you are on the right site. Most modern browsers try to warn users if they are going to known phishing sites (Firefox uses the Google Safe Browsing service for this). In addition, if you use a password manager, then it shouldn’t automatically fill in your password on a phishing site because password managers key off of the domain name and just looking similar isn’t good enough. Of course, both of these defenses are imperfect: the lists of phishing sites can be incomplete and if users don’t use password managers or are willing to manually cut and paste their passwords, then phishing attacks are still possible.3

Beyond Passwords

The good news is that we now have standards and technologies which are better than simple passwords and are more resistant to these kinds of attacks. I’ll be talking about them in the next post.


  1. A more fatal security issue occurs when application developers mistakenly write plaintext user passwords to debug logs. This allows the attacker to target the logging system and get immediate access to passwords without having to do any sort of computational work. 
  2. The Facebook system is actually pretty ornate. At least as of 2014 they had four separate layers: MD5, HMAC-SHA1 (with a public salt), HMAC-SHA256 (with a secret key), and Scrypt, and then HMAC-SHA256 (with public salt) again. Muffett’s talk and this post do a good job of providing the detail, but this design is due to a combination of technical requirements. In particular, the reason for the MD5 stage is that an older system just had MD5-hashed passwords and because Facebook doesn’t know the original password they can’t convert them to some other algorithm; it’s easiest to just layer another hash on. 
  3. This is an example of a situation in which the difficulty of implementing a good password manager makes the problem much worse. Sites vary a lot in how they present their password dialogs and so password managers have trouble finding the right place to fill in the password. This means that users sometimes have to type the password in themselves even if there is actually a stored password, teaching them bad habits which phishers can then exploit. 

The post A look at password security, Part II: Web Sites appeared first on The Mozilla Blog.

by Mozilla at Mon Jul 13 2020 16:36:40 GMT+0000 (Coordinated Universal Time)

Sustainability needs culture change. Introducing Environmental Champions.

Sustainability is not just about ticking a few boxes by getting your Greenhouse Gas (GHG) emissions inventory, your reduction and mitigation goals, and your accounting in shape. Any transformation towards sustainability also needs culture change.

In launching Mozilla‘s Sustainability Programme, our Environmental Champions are a key part of driving this organisational culture change.

Recruiting, training, and working with a first cohort of Environmental Champions has been a highlight of my job in the last couple of months. I can’t wait to see their initiatives taking root across all parts of Mozilla.

We have 14 passionate and driven individuals in this first cohort. They are critical amplifiers who will nudge each and every one us to incorporate sustainability into everything we do.

 

What makes people Champions?

“We don’t need hope, we need courage: The courage to change and impact our own decisions.”

This was among the top take-aways of our initial level-setting workshop on climate change science. In kicking off conversations around how to adjust our everyday work at Mozilla to a more sustainability-focused mindset, it was clear that hope won’t get us to where we need to be. This will require boldness and dedication.

Our champions volunteer their time for this effort. All of them have full-time roles and it was important to structure this process so that it is inviting, empowering, and impactful. To me this meant ensuring manager buy-in and securing executive sponsorship to make sure that our champions have the support to grow professionally in their sustainability work.

In the selection of this cohort, we captured the whole breadth of Mozilla: representatives from all departments, spread across regions, including office as well as remote workers, people with different tenure and job levels, and a diversity in roles. Some are involved with our GHG assessment, others are design thinkers, engineers, or programme managers, and yet others will focus on external awareness raising.

 

Responsibilities and benefits

In a nutshell, we agreed on these conditions:

Environmental Champions are:

  • Engaged through a peer learning platform with monthly meetings for all champions, including occasional conversations with sustainability experts. We currently alternate between four time zones, starting at 8am CEST (UTC+2), CST (UTC+8), EDT (UTC-4), PDT (UTC-7), respectively to equally spread the burden of global working hours.
  • Committed to spend about 2-5h each month supporting sustainability efforts at Mozilla.
  • Committed to participate in at least 1 initiative a year.
  • Committed to regularly share initiatives they are driving or participating in.
  • Dedicated to set positive examples and highlight sustainability as a catalyst of innovation.
  • Set up to provide feedback in their teams/departments, raise questions and draw attention to sustainability considerations.

The Sustainability team:

  • Provides introductory training on climate science and how to incorporate it into our everyday work at Mozilla. Introductory training will be provided at least once a year or as soon as we have a critical mass of new champions joining us on this journey.
  • Commits to inviting champions for initial feedback on new projects, e.g. sustainability policy, input on reports, research to be commissioned.
  • Regularly amplifies progress and successes of champions’ initiatives to wider staff.
  • May offer occasional access to consultants, support for evangelism (speaking, visibility, support for professional development) or other resources, where necessary and to the extent possible.

 

Curious about their initiatives?

We are just setting out and we already have a range of ambitious, inspiring projects lined up.

Sharmili, our Global Space Planner, is not only gathering necessary information around the impact of our global office spaces, she will also be leading on our reduction targets for real estate and office supplies. She puts it like this: “Reducing our Real Estate Footprint and promoting the 3 R’s (reduce, reuse, recycle) is as straight-forward as it can be tough in practice. We’ll make it happen either way.”

Ian, a machine learning engineer, is looking at Pocket recommendation guidelines and is keen to see more collections like this Earth Day 2020 one in the future.

Daria, Head of Product Design in Emerging Technologies, says: “There are many opportunities for designers to develop responsible technologies and to bring experiences that prioritize sustainability principles. It’s time we unlocked them.” She is planning to develop and apply a Sustainability Impact Assessment Tool that will be used in decision-making around product design and development.

We’ll also be looking at Firefox performance and web power usage, starting with explorations for how to better measure the impact of our products. DOM engineer, Olli will be stewarding these.

And the behind the scenes editorial support thinking through content, timing, and outreach? That’s Daniel for you.

We’ll be sharing more initiatives and the progress they are all making as we move forward. In the meantime, do join us on our Matrix channel to continue the conversation.

The post Sustainability needs culture change. Introducing Environmental Champions. appeared first on The Mozilla Blog.

by Mozilla at Mon Jul 13 2020 07:11:15 GMT+0000 (Coordinated Universal Time)

Thursday, July 9, 2020


Mozilla

Thank you, Julie Hanna

Over the last three plus years, Julie Hanna has brought extensive experience on innovation processes, global business operations, and mission-driven organizations to her role as a board member of Mozilla Corporation. We have deeply appreciated her contributions to Mozilla throughout this period, and thank her for her time and her work with the board.

Julie is now stepping back from her board commitment at Mozilla Corporation to focus more fully on her longstanding passion and mission to help pioneer and bring to market technologies that meaningfully advance social, economic and ecological justice, as evidenced by her work with Kiva, Obvious Ventures and X (formerly Google X), Alphabet’s Moonshot Factory. We look forward to continuing to see her play a key role in shaping and evolving purpose-driven technology companies across industries.

We are actively looking for a new member to join the board and seeking candidates with a range of backgrounds and experiences.

The post Thank you, Julie Hanna appeared first on The Mozilla Blog.

by Mozilla at Thu Jul 09 2020 18:33:28 GMT+0000 (Coordinated Universal Time)

Wednesday, July 8, 2020


Mozilla

A look at password security, Part I: history and background

Today I’d like to talk about passwords. Yes, I know, passwords are the worst, but why? This is the first of a series of posts about passwords, with this one focusing on the origins of our current password systems starting with log in for multi-user systems.

The conventional story for what’s wrong with passwords goes something like this: Passwords are simultaneously too long for users to memorize and too short to be secure.

It’s easy to see how to get to this conclusion. If we restrict ourselves to just letters and numbers, then there are about 2^6 one-character passwords, 2^12 two-character passwords, etc. The fastest password cracking systems can check about 2^36 passwords/second, so if you want a password which takes a year to crack, you need a password 10 characters long or longer.
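As a quick back-of-the-envelope check of that claim (my own arithmetic, not from the post), with a 62-character alphabet of letters and digits:

import math

guesses_per_second = 2 ** 36            # rough cracking speed cited above
seconds_per_year = 60 * 60 * 24 * 365
alphabet = 26 + 26 + 10                 # upper- and lowercase letters plus digits

guesses_per_year = guesses_per_second * seconds_per_year
chars_needed = math.log(guesses_per_year, alphabet)
print(round(chars_needed, 1))           # ~10.2, i.e. roughly 10 random characters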

The situation is actually far worse than this; most people don’t use randomly generated passwords because they are hard to generate and hard to remember. Instead they tend to use words, sometimes adding a number, punctuation, or capitalization here and there. The result is passwords that are easy to crack, hence the need for password managers and the like.

This analysis isn’t wrong, precisely; but if you’ve ever watched a movie where someone tries to break into a computer by typing passwords over and over, you’re probably thinking “nobody is a fast enough typist to try billions of passwords a second”. This is obviously true, so where does password cracking come into it?

How to design a password system

The design of password systems dates back to the UNIX operating system, designed back in the 1970s. This is before personal computers and so most computers were shared, with multiple people having accounts and the operating system being responsible for protecting one user’s data from another. Passwords were used to prevent someone else from logging into your account.

The obvious way to implement a password system is just to store all the passwords on the disk and then when someone types in their password, you just compare what they typed in to what was stored. This has the obvious problem that if the password file is compromised, then every password in the system is also compromised. This means that any operating system vulnerability that allows a user to read the password file can be used to log in as other users. To make matters worse, multiuser systems like UNIX would usually have administrator accounts that had special privileges (the UNIX account is called “root”). Thus, if a user could compromise the password file they could gain root access (this is known as a “privilege escalation” attack).

The UNIX designers realized that a better approach is to use what’s now called password hashing: instead of storing the password itself you store what’s called a one-way function of the password. A one-way function is just a function H that’s easy to compute in one direction but not the other.1 This is conventionally done with what’s called a hash function, and so the technique is known as “password hashing” and the stored values as “password hashes”

In this case, what that means is you store the pair: (Username, H(Password)). [Technical note: I’m omitting salt which is used to mitigate offline pre-computation attacks against the password file.] When the user tries to log in, you take the password they enter P and compute H(P). If H(P) is the same as the stored password, then you know their password is right (with overwhelming probability) and you allow them to log in, otherwise you return an error. The cool thing about this design is that even if the password file is leaked, the attacker learns only the password hashes.2
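Here is a minimal Python sketch of that scheme (my own illustration; salt is deliberately omitted, just as in the description above, and a real system would also use a slow hash, as discussed below):

import hashlib

def H(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

password_file = {}   # username -> H(password); this is what ends up on disk

def set_password(user: str, password: str) -> None:
    password_file[user] = H(password)

def login(user: str, password: str) -> bool:
    return password_file.get(user) == H(password)

set_password("alice", "correct horse battery staple")
print(login("alice", "correct horse battery staple"))   # True
print(login("alice", "hunter2"))                        # False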

Problems and countermeasures

This design is a huge improvement over just having a file with cleartext passwords and it might seem at this point like you didn’t need to stop people from reading the password file at all. In fact, on the original UNIX systems where this design was used, the /etc/passwd file was publicly readable. However, upon further reflection, it has the drawback that it’s cheap to verify a guess for a given password: just compute H(guess) and compare it to what’s been stored. This wouldn’t be much of an issue if people used strong passwords, but because people generally choose bad passwords, it is possible to write password cracking programs which would try out candidate passwords (typically starting with a list of common passwords and then trying variants) to see if any of these matched. Programs to do this task quickly emerged.

The key thing to realize is that the computation of H(guess) can be done offline. Once you have a copy of the password file, you can compare your pre-computed hashes of candidate passwords against the password file without interacting with the system at all. By contrast, in an online attack you have to interact with the system for each guess, which gives it an opportunity to rate limit you in various ways (for instance by taking a long time to return an answer or by locking out the account after some number of failures). In an offline attack, this kind of countermeasure is ineffective.

There are three obvious defenses to this kind of attack:

  • Make the password file unreadable: If the attacker can’t read the password file, they can’t attack it. It took a while to do this on UNIX systems, because the password file also held a lot of other user-type information that you didn’t want kept secret, but eventually that got split out into another file in what’s called “shadow passwords” (the passwords themselves are stored in /etc/shadow). Of course, this is just the natural design for Web-type applications where people log into a server.
  • Make the password hash slower: The cost of cracking is linear in the cost of checking a single password, so if you make the password hash slower, then you make cracking slower. Of course, you also make logging in slower, but as long as you keep that time reasonably short (below a second or so) then users don’t notice. The tricky part here is that attackers can build specialized hardware that is much faster than the commodity hardware running on your machine, and designing hashes which are thought to be slow even on specialized hardware is a whole subfield of cryptography. (There’s a short sketch of this approach just after this list.)
  • Get people to choose better passwords: In theory this sounds good, but in practice it’s resulted in enormous numbers of conflicting rules about password construction. When you create an account and are told you need to have a password between 8 and 12 characters with one lowercase letter, one capital letter, a number and one special character from this set — but not from this other set — what they’re hoping you will do is create a strong password. Experience suggests you are pretty likely to use Passw0rd!, so the situation here has not improved that much unless people use password managers which generate passwords for them.
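As promised above, here is a hedged sketch of slow, salted hashing using scrypt from Python’s standard library (my own illustration; the cost parameters are illustrative, not a recommendation from the post):

import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)
    # n, r, p control how expensive each guess is for an attacker
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2 ** 14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2 ** 14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("Passw0rd!")
print(check_password("Passw0rd!", salt, stored))   # True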

The modern setting

At this point you’re probably wondering what this has to do with you: almost nobody uses multiuser timesharing systems any more (although a huge fraction of the devices people use are effectively UNIX: MacOS is a straight-up descendent of UNIX and Linux and Android are UNIX clones). The multiuser systems that people do use are mostly Web sites, which of course use usernames and passwords. In future posts I will cover password security for Web sites and personal devices.


  1. Strictly speaking we need the function not just to be one-way but also to be preimage resistant, meaning that given H(P) it’s hard to find any input p such that H(p) == H(P)
  2. For more information on this, see Morris and Thompson for quite readable history of the UNIX design. One very interesting feature is that at the time this system was designed generic hash functions didn’t exist, and so they instead used a variant of DES. The password was converted into a DES key and then used to encrypt a fixed value. This is actually a pretty good design and even included a feature designed to prevent attacks using custom DES hardware. However, it had the unfortunate property that passwords were limited to 8 characters, necessitating new algorithms that would accept a longer password. 

The post A look at password security, Part I: history and background appeared first on The Mozilla Blog.

by Mozilla at Wed Jul 08 2020 20:55:23 GMT+0000 (Coordinated Universal Time)

Saturday, June 27, 2020


Yoosuk Sim

Creating a patch for GNU GCC using Git

Overview and Target Audience

The GNU GCC project blends a traditional contribution method with contemporary git tools, making the experience different from some other git-based projects. This blog post will explore different aspects of the process, helpful commands, and various scripts that should make the experience more pleasant for new contributors. While this blog aims to help new contributors get accustomed to the GNU GCC code culture and make contributing easier, it must be stressed that this is not in any way an in-depth exploration of the process. It should help you put your first foot forward; the community will help you take the rest of the steps from that point on. This post also assumes the user is in a POSIX environment (e.g. Linux, FreeBSD).

Git and GNU

As stated in this Phoronix post, GNU GCC made a full transition to git in early 2020. As of this writing, the community seems to be adjusting to the new tools. GNU GCC hosts its own git server as well as a GitHub mirror. As far as contributing code goes, making a PR is not the way to go; rather, the contributor is expected to make a patch file and submit it to the gcc-patches@gcc.gnu.org mailing list, where the patch will go through the process of being reviewed, fixed, and, hopefully, committed. Fortunately, git provides some useful tools for these purposes.

Useful Git commands

Some of the more famous git commands include: checkout, branch, commit, pull, push, clone, and merge. This post will not cover these commands, as detailed instructions and examples for them are fairly easy to find. Instead, this post will cover some of the less famous commands that are nonetheless very useful for projects such as GNU GCC.

format-patch

This git command creates an email-ready text file, complete with a from-address, a subject line, a message, and the patch content. By default, it goes through the git commits and creates a patch file for each one. Two of its notable flags are:

  • -o: defines the destination of the output,
  • -M: defines the branch the current branch should compare to.
The command uses the commit title as the subject line, prefixing it with [PATCH]. A different string may be used as the prefix if needed. The commit message body becomes part of the email body, along with the diff for the patch.

send-email

This git command is useful for developers without a proper mail client that can process the patch file. In particular, Gmail is not very suitable for the process, as it may apply its own formatting when content is pasted through the browser client. This command also provides an easy way to script the process so that the patch can be sent with minimal user interaction. Some notable flags are --to and --cc. They can be used multiple times to designate the recipients of the email.

NOTE: The --to and --cc flags also exist for the format-patch command and may be used during patch creation if desired.

To use this feature with GMail, the account holder must supply additional information in the ~/.gitconfig file.

[user]
email = your-email-id-here@gmail.com
name = your-github-id-here

[sendemail]
smtpEncryption = tls
smtpServer = smtp.gmail.com
smtpUser = your-email-id-here@gmail.com
smtpServerPort = 587
smtpPass = your-gmail-App-password-here

Due to Gmail’s security policy, using your regular Gmail password for smtpPass will not work: an app password must be explicitly generated. Please refer to the related documentation for more information.

Python

The GNU GCC project makes use of Python for various tasks. In particular, the contrib/ folder contains a number of useful Python scripts that help with the patch submission process. The Python community suggests using a virtual environment for each Python project, with tools like pipenv. This keeps each project's dependencies separate from the others while also keeping a record of those dependencies, called a Pipfile, for consistent project behavior across different development environments.

Setting up Pipfile

GNU GCC unfortunately does not come with its own Pipfile, and Pipfile is not listed in its .gitignore either. For now, to keep a copy for use with the local repository, add the following line to the .git/info/exclude file:

Pipfile*

This will ignore both the Pipfile and Pipfile.lock. Please also install pipenv using your distro's package manager. Then, go to the root of the project directory and run:

$ pipenv install requests unidiff termcolor

This should install three packages necessary to run the two very useful Python scripts in the contrib/ directory.

Now, from the root of the directory, you can run scripts like mklog.py like this:

 $ pipenv run contrib/mklog.py path-to-patch-file-here 

Some useful scripts

contrib/mklog.py: ChangeLog generator

Each patch must contain a passage called a ChangeLog as part of the body of the message, preceding the patch information. While the exact formatting of the ChangeLog is beyond the scope of this blog, the contrib/ directory thankfully provides a useful script, called mklog.py, that assists with the process. This script accepts an existing patch and outputs a formatted, skeletal body appropriate for that patch. It usually requires more input from the user to fill out the specific changes the patch introduces to the code base. Note that this introduces a catch-22: the output of the script should go into the body of the patch, ideally when the patch is being made, but the script cannot run without an existing patch as input. This issue is addressed later with a script to streamline the process.

contrib/check_GNU_style.py: Styling checker

Like all well-maintained projects, GNU GCC has style rules that the code should follow. Unfortunately, the project doesn't seem to come with a tool like prettier that can automate the formatting. It does have the next best thing: a format checker for an existing patch, contrib/check_GNU_style.py. It takes a patch as a required argument and prints a report of styling suggestions to standard output. It may optionally create a file containing a condensed version of the suggestions. Using this script, the contributor can then manually make styling changes to their code.

Helpful Scripts

As programs and scripts that adhere to the Unix philosophy, many of the tools above do one thing and do it really well. Each of them is well suited as a single step among the multiple steps needed to achieve the goal. Fortunately, this makes them perfect components for scripts that streamline the process. The following scripts are proofs of concept for handling a given situation. They work well for me; I hope they serve you well. Don't forget to customize them to fit your development environment.

For reference, the directory structure is as follows:


gcc/ # GCC local project directory that holds all GCC-related folders.
- gcc/ # GCC local git repo directory, i.e. GCCTOP.
- patch/ # patch destination.
- gcc-build/ # GCC build directory, where configure and make are run.

gcccheckpatch.sh

This script is more of a quality-of-life improvement and contains no real additional logic. Please note that the environment variable GCCTOP refers to the root of the gcc local repo.

gcccheckpatch.sh:
#!/usr/bin/env sh

pipenv run $GCCTOP/contrib/check_GNU_style.py $@

gccmklog.sh

This script accepts one argument: the path to the patch. It generates the ChangeLog-formatted message and inserts it into the body of the patch. It uses /tmp to store intermediate files, which are automatically deleted by the script after execution.

#!/usr/bin/env bash


TARGETPATCH=$1
TEMPLOGFILE=/tmp/changelog
TEMPPATCHFILE=/tmp/temp.patch
PATCHCUTOFFLINE=$(sed -n '/^---$/=' $TARGETPATCH)
CUTOFFTOPLINE=$(( PATCHCUTOFFLINE - 1))
DATE=$(date "+%Y-%m-%d")
NAME="YOUR-NAME HERE"
EMAIL="your@email.here"

ChangeLogSetup(){
printf "" > $TEMPLOGFILE &&\
printf "" > $TEMPPATCHFILE &&\
return 0 || return 1
}

ChangeLogGen(){
pipenv run $GCCTOP/contrib/mklog.py $TARGETPATCH > $TEMPLOGFILE &&\
return 0||return 2
}

ChangeLogPatchUpdate(){
sed -n "1,${CUTOFFTOPLINE} p" $TARGETPATCH >> $TEMPPATCHFILE &&\
echo "" >> $TEMPPATCHFILE &&\
echo "${DATE} ${NAME} <${EMAIL}>" >> $TEMPPATCHFILE &&\
echo "" >> $TEMPPATCHFILE &&\
cat $TEMPLOGFILE >> $TEMPPATCHFILE &&\
sed -n "${PATCHCUTOFFLINE},$ p" $TARGETPATCH >> $TEMPPATCHFILE &&\
return 0||return 3
}

ChangeLogPatchReplace(){
mv $TARGETPATCH ${TARGETPATCH}.bk && cp $TEMPPATCHFILE $TARGETPATCH && return 0 || return 4
}

ChangeLogCleanup(){
/bin/rm $TEMPLOGFILE $TEMPPATCHFILE && return 0 || return 5
}

ChangeLog(){
ChangeLogSetup \
&& ChangeLogGen \
&& ChangeLogPatchUpdate \
&& ChangeLogPatchReplace \
&& ChangeLogCleanup \
&& $EDITOR $TARGETPATCH \
&& echo "Success: $TARGETPATCH" || echo "Something went wrong with $TARGETPATCH: exit code $?" >&2
}

main(){

ChangeLog

}

main

gccmksquashpatch.sh

This is a proof-of-concept script that first creates a single patch for all the changes made in the branch. This is akin to doing a squash merge on a PR. As such, it is suitable when the branch introduces small changes to the code base. It also assumes that the gcc local repo is inside a dedicated gcc folder that contains another folder, called "patch". As a note, since the GNU GCC project suggests making the build directory outside of the local repo, having the patch folder in the parent directory of the local repo sounds reasonable. The script outputs each patch into its own folder whose name is generated by the date command.

The script accepts one argument, the $TARGET branch. It first generates a squash-merged branch called patch/feature-branch-name by comparing the $BASE branch against the $TARGET branch, and then outputs a patch to the patch folder. It can be modified to then send the patch over email using the other script.

#!/usr/bin/env sh

BASE=base-branch-name
TARGET=$1
PATCHBRANCH=patch/feature-branch-name
DATE=$(date "+%Y%m%d-%H%M%S")
OUTPUT=$GCCTOP/../patch/$DATE

PatchGen(){
git checkout $BASE &&\
git branch -D $PATCHBRANCH &&\
git checkout -b $PATCHBRANCH &&\
git merge --squash $TARGET &&\
git commit &&\
git format-patch -o $OUTPUT -M $BASE &&\
return 0 || echo "Something went wrong in PatchGen"
return 1
}

main(){

PatchGen \
&& $HOME/bin/gccmklog.sh $OUTPUT/*patch # since there is just one squashed patch, run mklog on it
# Uncomment the next line if this script should also send the patch email:
#$HOME/bin/gsocsendemail.sh $OUTPUT/*patch

git checkout $TARGET
}

main

gccsendpatch.sh

Once the patch is created, this simple script sends it to the recipients. As this blog is about sending the patch to the gcc-patches mailing list, the appropriate address is added via the --cc flag. The --confirm=always option also provides one last chance to look over the messages before they are sent.

This script accepts one argument, which is the path name to the patch.

Provided the ~/.gitconfig is set up properly, the patch should be received by the community in short order.

#!/usr/bin/env sh

git send-email --to=first-recipient@email.address --to=second-recipient@email.address --cc=gcc-patches@gcc.gnu.org --confirm=always $@


by Yoosuk Sim at Sat Jun 27 2020 14:20:01 GMT+0000 (Coordinated Universal Time)

Tuesday, June 16, 2020


Josue Quilon Barrios

hexV


hexV is a little tool that I've been working on for a while.
It's a tiny viewer that shows the content of any file in hexadecimal.
Something like this:

[Screenshot: hexV displaying a file's contents as rows of hexadecimal bytes]

I first thought about making hexV while I was working on another project that involved parsing and rendering large images without using any official libraries like libpng. I needed something that I could use to learn and understand how images are structured and stored depending on the type (PNG, JPEG, BMP...), and also that'd help me debug the parsers I wrote for that project.

hexV did the trick. Instead of having to go through endless console output to chase bugs, I could simply open the processed image with hexV and compare it to the values read by the parsers. I'm pretty sure that using hexV saved me a ton of debugging time.
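As a rough illustration of the core idea (a Python sketch of my own, not hexV's actual code), a hex viewer boils down to reading the file in fixed-size chunks and printing an offset, the hex bytes, and a printable-character column:

def hex_dump(path: str, width: int = 16) -> None:
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(width):
            hex_bytes = " ".join(f"{b:02x}" for b in chunk)
            printable = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{offset:08x}  {hex_bytes:<{width * 3}} {printable}")
            offset += len(chunk)

hex_dump("image.png")   # hypothetical file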

The first version of hexV had builds for both Windows and Linux, but I recently made some deep changes to the Linux version, so I removed the Windows build until I can find some time to port those changes over.

I have some interesting features in mind, like file type detection, that I'd like to add sometime soon. 
Oh, and this is an open-source project, so if you find hexV interesting and feel like collaborating, take a look at the issues and hack away!


by Josue Quilon Barrios at Tue Jun 16 2020 02:26:00 GMT+0000 (Coordinated Universal Time)

Friday, June 5, 2020


Calvin Ho

Attempt at Creating a Clone of Adventure Capitalist

After about 3 weeks working on this project, I'm kind of done. Built with React/Node/Redis/Socket.IO, it taught me a lot. The reason I say I'm only kind of finished is that unless I overhaul the whole backend of the code, I don't think I can get it working 100%. ... I know it looks awful haha.


The project was fun and challenging; there weren't any guidelines on how it should be built aside from being written in JavaScript/TypeScript. I initially tried using TypeScript, but it gave me headaches with the differences in import/export syntax. This is something I'll probably need to learn more about.

You can check out a version of a working game here (not mine). The hardest part about creating this clone was hiring a manager. Hiring a manager takes care of clicking for one of the shops. The number on the right shows the "cooldown" of the button before it can be pressed again. A manager automates this process: whenever the shop comes off cooldown, it should be clicked, the timer reflects how much cooldown time is left, and a progress bar provides a visual representation.

There were a few issues to think about:

  1. The initial max cooldown of the Lemonade shop is 500 ms; when a player purchases a certain amount of the shop, the max cooldown is halved, and the effect compounds with each threshold reached. So this shop could potentially be firing a request every 7.8 ms (500 / 64). What tool should I use for this?
  2. How should I manage the auto clicking by managers?
  3. The managed shops should keep running even if the window isn't open; players should be told how much cash they earned while the window was closed.
I looked around a bit and decided to use websockets, specifically Socket.IO. I thought using the traditional HTTP/GET request would destroy the backend since there could be a ton of requests being sent.

The second issue I kept thinking of was how to create the auto function for managing a shop + keeping track of how much time was left AND having all this be reflected on the front-end. After thinking about this for a few days and getting nowhere, I reached out to @humphd who suggested using the TTL(time to live) + Pub/Sub functionalities of Redis. This was pretty cool as it had me researching about keyspace notifications for Redis. That's all for now... I may blog more about this later.
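In the meantime, if you're curious what the Redis side of that suggestion can look like, here is a minimal sketch using Python's redis-py (Calvin's project is Node, so this is purely illustrative): set a cooldown key with a TTL, enable keyspace notifications, and subscribe to the expiry event.

# Sketch only: key names and DB index are made up for illustration.
import redis

r = redis.Redis()
r.config_set("notify-keyspace-events", "Ex")     # publish key-expiry events
r.setex("shop:lemonade:cooldown", 1, "running")  # cooldown key with a 1 s TTL

p = r.pubsub()
p.psubscribe("__keyevent@0__:expired")           # expiry events for DB 0
for message in p.listen():
    if message["type"] == "pmessage":
        print("cooldown finished for", message["data"])
        break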

by Calvin Ho at Fri Jun 05 2020 23:45:35 GMT+0000 (Coordinated Universal Time)

Friday, May 29, 2020


Corey James

React Practice With GraphQL

Hello, and Welcome to my blog! Recently I have been working through some tutorials from Thinkster.io. In this blog post, I will be reviewing my experience going through the React and Redux tutorial. I completed the tutorial using the GraphQL API I made. The tutorial uses a REST API, so I had to make lots …

Continue reading "React Practice With GraphQL"

by Corey James at Fri May 29 2020 02:41:20 GMT+0000 (Coordinated Universal Time)

Monday, May 25, 2020


Ray Gervais

Writing Golang Tests for an Alcoholic REST API

func TestHelloWorld(t *testing.T) {} is so well engrained into my muscle memory

Continuing on with last week's Athenaeum post, I mentioned that I wanted to explore easily overlooked processes or topics that junior developers don't always have the chance to dive into. The original intent was to allow the project to grow in such a way that it would demonstrate, through its iterative history, a step-by-step example of taking a small project all the way to the big world of Public Clouds, Containers, and other infrastructure goodies. Along with that, I also wanted to explore software development patterns and testing practices. In this article, I want to explain what's been done so far: writing back-end unit tests and exploring the world of code coverage!

How To Test Golang

If you follow my Twitter, you'll see that I've been a huge fan of the Learning Golang with Tests online book. It's a course I'd recommend to anyone who's interested in software development because, aside from teaching Go's idioms, it also teaches fantastic Test Driven Development (TDD) semantics and the art of writing DRY (Don't Repeat Yourself) code. I'd argue that even if you forget about Golang or adapt the lessons to a different language, the wisdom found in them is invaluable.

One thing that's explained in the first chapter, the Hello, World! of tests if you will, is that Go comes with its own testing capabilities in the standard library. Any file with the naming scheme *_test.go is treated as a test file, not a run-time file (which lets us run a folder's program with go run . and its tests with go test)!

Your First Test

Let's use this main.go file example for this section, which will enable us to test the Greet function. Having testable functions and components (compared to testing the entire program) is essential for good software design in my opinion.

// main.go
package main

import "fmt"

// Greet concatenates the given argument with a predetermined 'Hello, '
func Greet(name string) string {
  return "Hello, " + name
}

func main() {
  fmt.Println(Greet("Unit Testing 101"))
}

We could write the following test!

// main_test.go
package main

import (
    "testing"
    "github.com/stretchr/testify/assert"
)

func TestGreet(t *testing.T) {
    expected := "Hello, World!"
    received := Greet("World!")

    assert.Equal(t, expected, received)
}

So, what exactly does this do? Let's break down the process.

  1. I'm leveraging the testify (specifically, the assert sub-package) package by Stretchr. This is a common library used in Golang Testing for assert.* patterns.
  2. All test functions start with Test, which most IDEs will allow you to interact with and test on demand.
  3. From all the tutorials that I've seen around Golang testing, we're encouraged to create the expected struct/variable that will be referenced and compared later.
  4. received is the variable that will store the result of our function call.
  5. Finally, we compare the result to what we're expecting.

With the above steps, you've written your first Golang Unit Test! The Greet function that we wrote is stupidly basic (and also a pure function, which is a nice little hat tip to my functional programming interests!), but it allows for a great example of composing testable functions. The next question is, where do we go from here? What else could you test with the same concept? Here's a brief list of scenarios that I'll go into in greater detail later, which could be tested in similar patterns:

  • Scenario: Your function parses a JSON response, and returns an error object if there were any issues.
    • Test: When provided a valid JSON response, our function should return nil
  • Scenario: Your function returns a corresponding struct that has the same ID as what's passed in, along with an error object.
    • Test: When provided an invalid (negative) ID, our function should return an empty struct and an error object.

Once we have tests for such scenarios written and passing, the next question should be: What else can we test?

A Brief Introduction to Test Driven Development

I wrote about TDD and NodeJS in 2017, when it was all the rage between my Open Source classes and my internship in Mississauga, but figured it would be best to explain it here from the perspective of writing and testing a REST API. Martin Fowler explains Test-Driven Development as,

Test-Driven Development (TDD) is a technique for building software that guides software development by writing tests. It was developed by Kent Beck in the late 1990's as part of Extreme Programming. In essence you follow three simple steps repeatedly:

  • Write a test for the next bit of functionality you want to add.
  • Write the functional code until the test passes.
  • Refactor both new and old code to make it well structured.

You continue cycling through these three steps, one test at a time, building up the functionality of the system. Writing the test first, what XPE2 calls Test-First Programming, provides two main benefits. Most obviously it's a way to get SelfTestingCode, since you can only write some functional code in response to making a test pass. The second benefit is that thinking about the test first forces you to think about the interface to the code first. This focus on interface and how you use a class helps you separate interface from implementation.

The most common way that I hear to screw up TDD is neglecting the third step. Refactoring the code to keep it clean is a key part of the process, otherwise you just end up with a messy aggregation of code fragments. (At least these will have tests, so it's a less painful result than most failures of design.)

So, where does this come into play for our previous example if I wanted to follow a TDD approach? Let's iterate on possible test cases for our first hypothetical scenario.

As a reminder: Your function parses a JSON response, and returns an error object if there were any issues.

We could test the following (for example):

  • Test: When provided a valid JSON response, our function should return nil
  • Test: When provided an invalid JSON response, our function should return the parse error.
  • Test: When provided a malformed JSON string, our function should return the parse error.
  • Test: When provided a JSON response which doesn't map to our struct, our function should return the mapping error.

We're testing various scenarios, some plausible and well worth being tested, and others more far-fetched which help to provide sanity to the "what if" scenarios. Now, you mentioned something about testing an Alcoholic REST API?

Writing REST API Tests with TDD

Going forward, I'll be referencing Athenaeum's main.go, and with its rapid updates I'll omit including an already out-of-date version here. Currently, our main.go serves as the REST API router with the following CRUD (create, read, update, delete) routes:

  • GET /
  • GET /books/
  • GET /books/:id
  • POST /books/
  • PATCH /books/:id
  • DELETE /books/:id

With TDD, I went about writing the following test scenarios prior to writing the code itself:

  • SCENARIO: Valid GET / request should return "Hello, World!"
  • SCENARIO: Valid GET /books/ request against an empty database should return 0 results.
  • SCENARIO: Valid GET /books/ request against a populated database should return all books.
  • SCENARIO: Valid GET /books/:id/ request with ID against populated database should return specific book.
  • SCENARIO: Valid GET /books/:id/ request with Invalid ID against a populated database should return a "Record not found!" error

So we've covered the common use-cases for the first three routes, and that last one looks rather interesting. Let's break it down before moving forward.


// imports ()

// Helper Function
func performRequest(r http.Handler, method, path string) *httptest.ResponseRecorder {
    req, _ := http.NewRequest(method, path, nil)
    w := httptest.NewRecorder()
    r.ServeHTTP(w, req)
    return w
}

// Test Cases

func TestBooksCRUD(t *testing.T) {
    t.Run("Retrieve Non-Existing ID", func(t *testing.T) {
          w := performRequest(router, "PATCH", "/books/-2")

          assert.Equal(t, http.StatusBadRequest, w.Code)
          assert.Equal(t, "{\"error\":\"Record not found!\"}", w.Body.String())
    })
}
  1. I skipped the imports, but you can reference the [public version](https://github.com/raygervais/Athenaeum/blob/master/src/backend/main_test.go) for the complete source.
  2. I picked up the performRequest function from Craig Childs' Golang Testing - JSON Responses with Gin tutorial. Makes for far cleaner code reuse.
  3. TestBooksCRUD has the familiar test function signature, so this should be familiar.
  4. t.Run allows us to define sub-tests which relate to the parent's context. I'm leveraging this concept to group tests which relate to each other together instead of creating dedicated functions for each.
  5. w is the response from our request, which is defined and executed using the helper function performRequest.
  6. The last two lines are your typical assert.Equal patterns, ensuring that we are receiving the correct response code (400), and error: "Record not found!".

All of the tests that I listed for our REST API use similar code to compare and check each condition. Test Driven Development shouldn't stop at the "common" tests, but instead reach out to patterns and scenarios which no one expects. Essentially, I view TDD as a way to write witty tests which cover the greater use-cases that keep some SREs (site reliability engineers) up at night. Dave taught us in OSD500 to throw as many tests as we wanted at our functions, essentially trying to bend and break the inputs to test how resilient our code was. Likewise, Learning Go With Tests goes over how adding use-cases, types, and off-chance scenarios lets us investigate how truly robust our functions and handlers are. So with that, let's list all the scenarios that I came up with in our main_test.go file against the REST API:

  • SCENARIO: Valid GET / request should return "Hello, World!"
  • SCENARIO: INVALID POST/ should return a 404 code and "404 page not found".
  • SCENARIO: INVALID DELETE / should return a 404 code and "404 page not found".
  • SCENARIO: INVALID PATCH / should return a 404 code and "404 page not found".
  • SCENARIO: Valid GET /books/ request against an empty database should return 0 results.
  • SCENARIO: Valid GET /books/ request against a populated database should return all books.
  • SCENARIO: Valid GET /books/:id/ request with ID against populated database should return specific book.
  • SCENARIO: Invalid GET /books/:id/ request with negative ID against a populated database should return a "Record not found!" error.
  • SCENARIO: Invalid POST /books/ without models.CreateBook JSON mapping should return a 400 code and error message.
  • SCENARIO: Valid POST /books/ with the latest Harry Potter novel should return a 200 code and book.
  • SCENARIO: Invalid POST /books/ with an array of []models.Book should return a 400 code and error message.
  • SCENARIO: VALID PATCH /books/:id/ request with valid ID, and an updated models.UpdateBook struct that has a modified title should return 200 and the updated book.
  • SCENARIO: INVALID PATCH /books/:id/ request with valid ID, but no body should return 400 and error message.
  • SCENARIO: INVALID PATCH /books/:id/ without an id should return 400 and error message.
  • SCENARIO: INVALID PATCH /books/ should return a 404 code and "404 page not found".
  • SCENARIO: INVALID PATCH /books/:id/ request with Valid ID, and incorrect JSON body should return 400 and JSON mapping error message.
  • SCENARIO: Valid DELETE /books/:id/ with a valid ID should return 200.
  • SCENARIO: Invalid DELETE /books/:id/ with a invalid ID should return 400 and "Record not found!" error.
  • SCENARIO: INVALID DELETE /books/ should return a 404 code and "404 page not found".

What do most of these tests look like? At the time of writing main_test.go contained the following:

func TestBooksCRUD(t *testing.T) {
    dbTarget := "test.db"

    router, db := SetupRouter(dbTarget)

    db.DropTableIfExists(&models.Book{}, "books")
    db = models.SetupModels(dbTarget)
    defer db.Close()

    t.Run("Create Empty DB", func(t *testing.T) {
        w := performRequest(router, "GET", "/books/")

        assert.Equal(t, http.StatusOK, w.Code)
    })

    t.Run("Retrieve Nonexistent ID on Empty DB", func(t *testing.T) {

        w := performRequest(router, "GET", "/book/2")

        assert.Equal(t, http.StatusNotFound, w.Code)
    })

    t.Run("Populate DB with Harry Potter Set", func(t *testing.T) {
        books := []string{
            "Harry Potter and The Philosopher's Stone",
            "Harry Potter and The Chamber of Secrets",
            "Harry Potter and The Prisoner of Azkaban",
            "Harry Potter and The Goblet of Fire",
            "Harry Potter and The Order of The Phoenix",
            "Harry Potter and The Half-Blood Prince",
            "Harry Potter and The Deathly Hallows",
        }

        for _, book := range books {

            payload, _ := json.Marshal(models.CreateBookInput{
                Author: "J. K. Rowling",
                Title:  book,
            })

            req, err := http.NewRequest("POST", "/books/", bytes.NewReader(payload))
            req.Header.Set("Content-Type", "application/json")

            w := httptest.NewRecorder()
            router.ServeHTTP(w, req)

            assert.Equal(t, nil, err)
            assert.Equal(t, http.StatusOK, w.Code)
        }
    })

    t.Run("Retrieve Existing ID on Populated DB", func(t *testing.T) {
        w := performRequest(router, "GET", "/books/2")

        expected := models.Book{
            Author: "J. K. Rowling",
            ID:     2,
            Title:  "Harry Potter and The Chamber of Secrets",
        }

        var response models.Book
        err := json.Unmarshal([]byte(w.Body.String()), &response)

        assert.Nil(t, err)
        assert.Equal(t, http.StatusOK, w.Code)
        assert.Equal(t, expected, response)
    })

    t.Run("Attempt Updating Non-Existing ID", func(t *testing.T) {
        w := performRequest(router, "PATCH", "/books/-2")

        assert.Equal(t, http.StatusBadRequest, w.Code)
        assert.Equal(t, "{\"error\":\"Record not found!\"}", w.Body.String())
    })

    t.Run("Updated Existing ID with Invalid Values", func(t *testing.T) {
        payload, _ := json.Marshal(map[int]string{
            2: "Harry Potter",
            3: "JK Rowling",
            4: "22",
        })

        req, err := http.NewRequest("PATCH", "/books/-2", bytes.NewReader(payload))
        req.Header.Set("Content-Type", "application/json")

        w := httptest.NewRecorder()

        router.ServeHTTP(w, req)

        assert.Equal(t, nil, err)
        assert.Equal(t, http.StatusBadRequest, w.Code)
    })

    t.Run("Update Existing ID on Populated DB", func(t *testing.T) {
        payload, _ := json.Marshal(models.UpdateBookInput{
            Title: "Harry Potter and The Weird Sisters",
        })

        req, err := http.NewRequest("PATCH", "/books/6", bytes.NewReader(payload))
        req.Header.Set("Content-Type", "application/json")

        w := httptest.NewRecorder()
        router.ServeHTTP(w, req)

        assert.Equal(t, nil, err)
        assert.Equal(t, http.StatusOK, w.Code)
    })

    t.Run("Get Updated Book from Populated DB", func(t *testing.T) {
        expected := models.Book{
            Author: "J. K. Rowling",
            Title:  "Harry Potter and The Weird Sisters",
            ID:     6,
        }

        w := performRequest(router, "GET", "/books/6")

        var response models.Book
        err := json.Unmarshal([]byte(w.Body.String()), &response)

        assert.Nil(t, err)
        assert.Equal(t, http.StatusOK, w.Code)
        assert.Equal(t, expected, response)
    })

    t.Run("Delete Invalid Book from Populated DB", func(t *testing.T) {
        w := performRequest(router, "DELETE", "/books/-1")
        assert.Equal(t, http.StatusBadRequest, w.Code)
    })

    t.Run("Delete Without ID Book from Populated DB", func(t *testing.T) {
        w := performRequest(router, "DELETE", "/books/")
        assert.Equal(t, http.StatusNotFound, w.Code)
        assert.Equal(t, "404 page not found", w.Body.String())
    })

    t.Run("Delete valid Book from Populated DB", func(t *testing.T) {
        w := performRequest(router, "DELETE", "/books/6")

        assert.Equal(t, "{\"data\":true}", w.Body.String())
        assert.Equal(t, http.StatusOK, w.Code)
    })
}

Next Steps

So once you have your API routes covered, what's next? I opted to (stubbornly) go deeper. I thought, if each bit of logic should have a test, then why don't we also replicate many of the tests at the controller level using a mock router? Why, you may be asking? Well, in my mind, this is not so much to duplicate the main-level API tests, but to test the controller logic and its inputs / outputs. It's another layer of sanity checks which I'd like to think helps ensure the functions can be updated without breaking known functionality. The book controller can be referenced here, but an example of UpdateBook (and its helper function RetrieveBookByID, which I learned about thanks to LogRocket's tutorial) appears as:


package controllers
// imports ()

// RetrieveBookByID is a helper function which returns a boolean based on success to find book
func RetrieveBookByID(db *gorm.DB, c *gin.Context, book *models.Book) bool {
    if err := db.Where("id = ?", c.Param("id")).First(&book).Error; err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": errRecordNotFound})
        return false
    }

    return true
}


// UpdateBook called by PATCH /books/:id
// Update a book
func UpdateBook(c *gin.Context) {
    db := c.MustGet("db").(*gorm.DB)

    // Get model if exist
    var book models.Book
    if !RetrieveBookByID(db, c, &book) {
        return
    }

    // Validate input
    var input models.UpdateBookInput
    if err := c.ShouldBindJSON(&input); err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
        return
    }

    db.Model(&book).Updates(input)

    c.JSON(http.StatusOK, book)
}

When writing the tests, I ran into a major issue when having to deal with more advanced requests: how does one mock a request body? Without learning how to do this, I wouldn't be able to test the CreateBook and UpdateBook functions, which I would argue is a big deal. So, two hours of Googling and trial-and-error later led me to this nugget of magical goodness (which is also where my tweets became sporadic as I embarked on the quest for 100% code coverage with my newfound powers):

func SetupContext(db *gorm.DB) (*httptest.ResponseRecorder, *gin.Context) {
    w := httptest.NewRecorder()
    c, _ := gin.CreateTestContext(w)
    c.Set("db", db)

    return w, c
}

func SetupRequestBody(c *gin.Context, payload interface{}) {
    reqBodyBytes := new(bytes.Buffer)
    json.NewEncoder(reqBodyBytes).Encode(payload)

    c.Request = &http.Request{
        Body: ioutil.NopCloser(bytes.NewBuffer(reqBodyBytes.Bytes())),
    }
}

t.Run("Update Valid Book", func(t *testing.T) {
        w, c := SetupContext(db)

        payload := models.CreateBookInput{
            Title: "Hermione Granger and The Wibbly Wobbly Timey Wimey Escape",
        }

        SetupRequestBody(c, payload)
        c.Params = []gin.Param{gin.Param{Key: "id", Value: "3"}}

        UpdateBook(c)

        var response models.Book
        err := json.Unmarshal([]byte(w.Body.String()), &response)

        assert.Equal(t, 200, w.Code)
        assert.Equal(t, nil, err)
        assert.Equal(t, payload.Title, response.Title)
})

For clarity (and DRY principles), the most important piece of code is the SetupRequestBody function, which allows us to essentially create the Request with its body. Doing so allows the function we are testing, UpdateBook(c), to pick up the correct context, which is the request with the mocked body, headers, etc. For those who've been Googling this just as frantically as I was, I hope this helps!

Resources

by Ray Gervais at Mon May 25 2020 00:00:00 GMT+0000 (Coordinated Universal Time)

Tuesday, May 19, 2020


Adam Pucciano

Python Series: Finishing Touches

This is part of my ongoing series to explore the uses of Python to create a real-time dashboard display for industrial machinery. Please read the preceding parts first!

Part 3: Finishing Touches 

Dashboard gauges indicating the Machine’s cycle time and efficiency

Things were finally coming together! It was now a matter of integrating more machines and optimizing page loading. I was also in a good position to add a few more quality-of-life features to the application, all of which were readily handled in Python.

FTP: Machines were still logging information to binary files, and their storage was accessible via FTP. Creating a small manager class using Python's standard ftplib was a great alternative way to view historic information or download it all as an archive. I would then call on this manager script to help me load information during POST/GET requests.
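As a rough idea of what such a manager can look like (the class name, host, credentials, and paths below are placeholders, not the actual setup), ftplib needs very little code:

from ftplib import FTP

class MachineLogFetcher:
    """Minimal sketch of an FTP 'manager' for machine log files."""

    def __init__(self, host, user, password):
        self.host, self.user, self.password = host, user, password

    def list_logs(self, remote_dir="/logs"):
        # List the remote log files for a machine.
        with FTP(self.host) as ftp:
            ftp.login(self.user, self.password)
            return ftp.nlst(remote_dir)

    def download(self, remote_name, local_path):
        # Download one binary log file to disk.
        with FTP(self.host) as ftp:
            ftp.login(self.user, self.password)
            with open(local_path, "wb") as fh:
                ftp.retrbinary("RETR " + remote_name, fh.write)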

File Viewer/Upload: Some machines in the field do not have internet access and have a long-running history of production written in binary files. The machine's local files are the only record of its production efficiency. Reusing components from the statistics dashboard, a simple file-upload dialog paired with a file viewer emerged, allowing the user to upload a report, interact with it, and display the results the same way connected machines can be viewed. This could also be useful for files that originated from a simulation snapshot.

Redis and Docker: Two of the many Django integrations, and modern tools every web service looks to take advantage of in order to deliver cohesive, fluid content to its users. Used by tech giants like Twitter, GitHub and Pinterest, these two technologies work together to cache session information and reduce query and server load. It would take another series to cover them in depth; I only really had a definition's worth of understanding of either before this project began. But using the Django documentation in conjunction with Docker installation guides for Redis made it straightforward to incorporate within a few days. Django-plotly-dash makes use of channels (channels_redis) for live updating. It's amazing how all these libraries start to chain together and very quickly help deploy what is needed much more professionally. At first the syntax was actually so simple that it confused me about how it all worked (I suspect a lot of magic in the underlying framework). With a few changes to the settings, and after reading some introductory tutorials, I had my application capturing data through a Redis cache running in a Docker container.
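For anyone wiring this up themselves, the Django side mostly comes down to a couple of settings entries. Here is a minimal sketch; the backends, host, and port are assumptions for illustration, not the project's exact configuration:

# settings.py (sketch) -- assumes a Redis container is listening on localhost:6379
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("127.0.0.1", 6379)]},
    }
}

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {"CLIENT_CLASS": "django_redis.client.DefaultClient"},
    }
}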

CSS/Bootstrap: This project gave me a chance to work on my CSS and JavaScript (front-end) skills. I am certain I still need a lot of work in this area, but every day I see small improvements to the overall look and feel of the interface. I know that with more practice it will soon transform into a fluid, dynamic UI.

Python was very portable. By using a ‘requirements.txt’ file I could easily move my development environment to a new machine. Remember to spin up a virtual environment first; it works very well with the Django manager (manage.py). Learning all of this new content may be daunting at first, but stick with it and I promise you it will be worth it (it always is)! This Python approach to the dashboard became easier and easier to make real. Each module was supported by ingenious packages made by the community.

This project has been a great experience, and a great stepping stone to continue to improve my own skills while at the same time providing a fancy new piece of software.

This project seemed to evolve at a rapid pace, and I too feel like I leveled up because of it. During this journey, I was able to:

  • Break out of a comfort zone and begin to master Python for software development.
  • Lead my own research and create a development plan for my own application
  • Learn more about industrial manufacturing and injection molding industry
  • Read through, understand documentation and become a part of various Python communities
  • Touch upon and learn OPCUA, understanding how the protocol establishes communication between machinery
  • Brush up on my Linux skills, initializing and hosting an internal web server using Ubuntu
  • Learn how to create a Docker container running Redis
  • Work on my writing skills to communicate all this wonderful news!

Thanks for reading!

Adam

 

Please feel free to contact me about any of the sections you’ve read. I’d love to discuss it further or clarify any points you as the reader come across.

by Adam Pucciano at Tue May 19 2020 17:49:02 GMT+0000 (Coordinated Universal Time)


Ray Gervais

Why You Need A Dog's Opinion For Code Review

An introduction to Hound, The Code Review CI Tool and Testing in Golang

Conceptualizing Athenaeum

James Inkster wrote up the best description of Athenaeum, which I'll share here.

Sharing your favorite books has never been easier. What Athenaeum brings to the table is the ability to login and create a repository of your favorite books, you will be able to share your favorite books with other users on the platform, and find users who are into similar books as yourself. You'll be able to see trending books so you can always stay up-to date with what books might be enjoyable for you to read based on your likes. Athenaeum will implement a minimalistic design to allow for functional UI design choices.

The idea originally came about when I was interested in creating a full-stack application which would enable contributors to test and learn about modern development practices, technologies, and languages. I opted for the following stack after James proposed Athenaeum as the idea:

  • Back-end Services: Golang
  • Database Persistence: SQLite
  • Front-end Interfaces: Node / ReactJS

We're Going To Make It Awesome [And We're Going To Follow All The Best Practices!]

Or at least, that's what we all think at the start of the project. Every code base has its respective hacks, workarounds, and inconsistencies when not kept in check. I imagine that consistent code quality in each pull request is the goal, but we all know how easy it is for items to slip past our reviews. That's why I wanted to explore adding code analysis tooling to the project from the very start, for both the front-end and back-end. The Open Web Application Security Project (OWASP) is a nonprofit foundation which lists such tooling, along with strengths and weaknesses such as:

Some tools are starting to move into the IDE. For the types of problems that can be detected during the software development phase itself, this is a powerful phase within the development life cycle to employ such tools, as it provides immediate feedback to the developer on issues they might be introducing into the code during code development itself. This immediate feedback is very useful, especially when compared to finding vulnerabilities much later in the development cycle.

Enter the first tool, Hound!

Being the Guardian of the Pull Request Gates

So what is Hound? Their official website states the following:

Hound comments on code quality and style issues, allowing you and your team to better review and maintain a clean code base.

The list of supported languages and linters are:

  • JavaScript / TypeScript / CoffeeScript
  • Ruby
  • Python
  • Swift
  • PHP
  • SASS
  • Golang
  • Elixir
  • Haml

The site's support pages are rather stark, but the configuration examples provide an excellent jumping-off point for those who don't want to dive into the source code. Did I mention that it's open source?!

I find Hound to be a great spare pair of eyes during code reviews, since it enables a few convenience workflows:

  • Shut down all GitHub Actions if errors / issues are found
  • Remove stale comments from previous Hound scans
  • Consume eslint.json, tslint.json, and various configuration files for lint customization per-language
  • Share lint customizations among projects

Golang's internal tooling comes with a built-in linter, which means that your editor, paired with its corresponding Golang plugin, should complain when there are syntax and linting issues. I found that even vim-go highlighted the same error in the screenshot above, which means Hound itself isn't providing a critical service for the back-end aside from uprooting ignored warnings / errors (such as missing comments above function implementations). NodeJS, in contrast, has quite a few different ways that developers write JavaScript code. By leveraging an ESLint configuration (.eslintrc) such as this one, we're able to ensure that the code base adheres to a set of defined guidelines in the less opinionated languages.

{
  "env": {
    "commonjs": true,
    "es6": true,
    "node": true
  },
  "extends": ["eslint:recommended", "plugin:prettier/recommended"],

  "parserOptions": {
    "ecmaVersion": 2020
  },
  "rules": {
    "no-console": "error",
    "require-atomic-updates": "error",
    "default-case": "error",
    "default-case-last": "error",
    "default-param-last": "error",
    "require-await": "error",
    "camelcase": [
      "error",
      {
        "properties": "never"
      }
    ],
    "comma-dangle": [
      "error",
      {
        "arrays": "always",
        "objects": "always"
      }
    ],
    "comma-spacing": [
      "error",
      {
        "before": false,
        "after": true
      }
    ],
    "quotes": ["error", "double"]
  }
}

Setting The Hound Loose

For Athenaeum's case, we only need to leverage golint and eslint for our services so far, so the configuration looks like this:

golint:
  enabled: true

eslint:
  enabled: true
  config_file: .eslintrc

fail_on_violations: true

That last line lets us tell GitHub Actions to stop building / testing when violations are found!

Next Steps

One of the goals of this project was to explore and learn, through trial and error, modern practices, tooling, and services which help enrich a code base. Hound was the easiest to add: a single source of truth when it came to code style enforcement. From there, I want to explore adding other tools to the CI chain and the release chain, such as:

Resources

by Ray Gervais at Tue May 19 2020 00:00:00 GMT+0000 (Coordinated Universal Time)

Saturday, May 16, 2020


Bowei Yao

A depressing scene

My wife and I went to A&W today to get some burgers. This particular A&W is located inside a convenience store.

We finished ordering and we were just looking around the shelves in the convenience store while waiting for our burgers to be made. Suddenly a man in line called out to us.

“Hey, you speak her language?”

“What?”

He gestured towards a short lady leaning over the checkout counter of the convenience store.

“You speak her language?”

“I don’t know. What’s going on?”

“You wanna help her?”

It took me a while to realize the situation. The lady was probably holding up the line for a while now due to miscommunications.

So I approached her while keeping my distance from everyone, since you know, this is the coronavirus special period. She’s short, fairly aged (I would say around 50-60 at least), and Asian.

I speak Mandarin, so I asked her in that. She answered. It didn't take long before we figured out that her credit card had expired. She said she was going to go back to her car to get a new card. I translated that for the cashier, and we let the man, who happened to be the next customer in line, get his order processed.

Now, up to this point, everything is fine. Everybody’s happy and nothing is wrong.

We got our A&W burgers and walked out of the convenience store, and this is what I see:

The Asian lady was standing next to an Audi SUV with the driver’s door open. Sitting in the driver’s seat is a young man wearing black sunglasses, arguing with the lady.

“What do you mean it doesn’t work?”

“The card won’t work, I’ve tried many times.”

“How can you be so stupid?”

“…”

I thought this was a scene that only appears in literature, but I guess literature does take its roots from real life after all.

For those of you don’t know, this is a stereotypical type of behaviour and mannerism exhibited by spoiled fuerdais. You may google this term now, as it is now officially enlisted into the English language. It is a derogative term aimed at the children of the nouveau rich from China.

So what are the things wrong with this picture?

Let’s unravel bit by bit. Now, it’s fairly simple to see that it’s argument between a mother and a son.

Perhaps you’ve seen some spoiled kids in your life – perhaps your neighbor’s kids or your relative’s kids. But this is a new level of spoiledness – this is a level of spoiledness which you have never seen before. This is a level which you would not accept nor agree with under any circumstances. This level of spoiledness is rarely exhibited by western parents, nor tolerated.

However, this type of overprotective parent/kid pairing is common in China, where the parents not only do everything for their kid, but also think on their kid's behalf. The kid's future, the kid's school, the kid's extra-curricular activities, what the kid wants to do in his/her free time: everything. The parents are thinking and planning all of that for the kid, and also acting it out, regardless of the kid's opinion.

… until the kid has reached the age of 20, and in more extreme cases, 30 and beyond. The parents have trouble letting go because it has become a long-time habit. The kid has gotten used to it, and in the back of his/her mind thinks they deserve it, that everything should be just the way it is, taken for granted.

Therefore, in this case, you see an elderly lady, walking around wobbly, running errands, while the young man sits in the shiny car texting on his phone. There is no sense of respect in the words that come out of his mouth towards his mother, and no initiative in his actions. He does not think to leave the car to help his mother, or to go in her place to make a simple purchase, despite the fact that his mother has great difficulty communicating.

by Bowei Yao at Sat May 16 2020 02:25:51 GMT+0000 (Coordinated Universal Time)

Friday, May 15, 2020


Adam Pucciano

Python Series: From app to dashboard

This write-up is part of my ongoing Python series, please check out the introduction or part 2 before continuing!

Part 3: Going Online

So there now existed an application where one could take old machine data files, run them through a sort of ‘processor’, and see the results. It was a nice little applet, but it was time to take the next step: this sort of application needed to be served better and accessed more flexibly, and that's when I made the pivot towards a web-based platform.

Plotly-dash had partly integrated a web-view and navigation system where the programmer could define a route to view the dashboard content. The issue I had with this implementation was that I could not share the view among multiple machines; the location had to be defined directly in the script that managed the dashboard.

if __name__ == "__main__":
    app.run_server(host='127.0.0.1', port='8888')

This snippet was at the bottom of my dashboard pages, and it basically says: if this is the main entry point of the program, start a server with the specified parameters. This was super useful during testing because I could spin up a really simple dashboard to try out another approach or feature of the plotly-dash library. But I was almost done with this phase; I needed to change where this was being called and move it up the hierarchy.

An alternative was to allow the user to change the data via a drop-down menu, but I had already foreseen many issues with this approach, namely that the dashboard page would still be the main entry point for the application. I wanted to hit a unique URL that pertained to a particular machine, and give the user very little responsibility for correctly selecting the reports to display. Plus, I still needed to define a central hub, a destination where the user could log in, navigate to a particular machine, and make some decisions on what to view. Plotly-dash did not really support this. I needed to take a step back to research and experiment with Python servers and how to program web services in Python.

The real heroes here are the developers at Django. This framework is an interesting one that has taken off in popularity quite a bit, and it also comes in a plethora of flavours. Another Django variant I am currently experimenting with is Django-oscar, which I hope to discuss in another blog in the near future. The reason I chose this framework as opposed to Flask is that it allowed me to easily set up multiple dashboards served from one host URL, and managing assets and connections comes out of the box. I needed a dashboard for each machine that would use the same template but display data specific to the chosen machine, and I did not want to manually define a page for each new machine connected to this project.
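A single parameterized URL pattern covers that. Here is a minimal sketch; the view and route names are illustrative, not the project's actual code:

# urls.py (sketch)
from django.urls import path

from . import views

urlpatterns = [
    # One pattern serves a dashboard page for any machine id.
    path("machines/<int:machine_id>/", views.machine_dashboard, name="machine-dashboard"),
]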

All the machine resources would be connected to this server – thus only one true connection to the machine had to be made. All other clients would then share this connection when viewing the machine in order to receive updates. Looking back on it now, this could have been possible with the Flask library, but Django (and more so Django-plotly-dash) enabled it quite nicely, promoting the use of multiple apps within the project while also giving management tools built into the framework. Of course, this was a tremendous upgrade from the basic use of Plotly dashboards, which is designed to serve a URL oriented around the dashboard itself.

Again, I was left with my dashboard, seemingly unchanged, only now accessible via the web. At this point I could navigate to my dashboard through a boilerplate-looking web portal. My progress was looking even better, though, as I now had a concrete platform and could start to implement bigger changes!

With the introduction of Django-plotly-dash, much of the project shifted gears. I was using a lot more packages that handled the different pieces of execution mentioned previously, like pyodbc, free-opcua, ftplib, and json. I also got experience with the built-in Django manager (manage.py) and settings files, both of which come with a lot of documentation to harness their flexibility. It was also the first time I started to take advantage of Python virtual environments (venv).

Virtual environments are self-contained areas where one can install and use modules without affecting the versions you have on your system. I like to imagine them as the .git of Python packages. It was embarrassing that I did not use this technique sooner, as it allowed me to try out different versions of all the packages I had been using and keep them up to date with a requirements.txt. If you do not know about virtual environments yet, I urge you to read more about them before continuing any Python project you have!

Templates were the biggest feature of Django-plotly-dash. They made it very easy to embed a dashboard into an HTML page. They were a little difficult to understand at first, partly because they work with such a lightweight syntax, but with some supplementary reading I was able to get a working version using this approach. I could even display multiple dashboards on one page, which was a great success.

An early iteration of both dashboards on a single page

This type of view was made possible by creating my own Jinja2 template tag. It was necessary to create such a tag to handle the conversion of a dictionary to JSON format. I named it jsonify, and it was loaded on a page like so:

{% load jsonify %}
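How such a tag is registered depends on the template backend. As a rough illustration (not Adam's actual code, which uses a Jinja2-style tag), a Django template filter doing the same job can be as small as this:

# templatetags/jsonify.py (sketch)
import json

from django import template

register = template.Library()

@register.filter
def jsonify(data):
    # Convert a dictionary into a JSON string for the template.
    # Escaping/safety concerns are left out of this sketch.
    return json.dumps(data)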

Since many of the dash components would be loaded with the ‘value’ property, much like in JavaScript, I used it to append the ‘value’ tag to whatever data I needed to get to the page from within the dictionary. By using:

new_data[header] = { 'value': data.get(header) }

I could prepare any column data for the dashboard, even data read from a database.

So my incoming variables looked like this:

oee_value : { 'value': '99' }

This would be mapped to the component named "oee_value" and automatically apply the default value 99 to whatever type of component it was.

I used this technique for several components on the page. But what about components that did not have a ‘value’ parameter by default? Some components were ‘text’ based or driven by some boolean indicator. I did not want to have to decipher which component this data belonged to. It was already working so well, and the code looked super simple.

Take for example this bootstrap component, which I used as a way to display the serial number in a nicely formatted, well coloured manner.

dbc.Badge(id="NumberBadge")

For these I would use some callback tricks by defining a hidden input field. Then, on the initial load of the page, I would use the callback to assign this value to the rightful display:

dcc.Input(id='Number', type='hidden', value='filler')

The values coming in from the template would be matched to the id and value of the hidden input component; this in turn invokes its callback, which is how I got the serial number into a neat-looking Bootstrap badge. Using the same technique, I could query the machine type, so that each machine in the list would have its appropriate picture displayed.

@app.callback(Output('NumberBadge', 'children'), [Input('Number', 'value')])
def value_to_children(val):
    return val

A look at a few visual upgrades

Django allows you to define your own tags which are like little functions sprinkled in the html code. If you are unfamiliar with this, imagine a syntax close to Razor (using the @ symbol to declare you are doing some server processing). There were a lot of helpful articles that helped me along the way.

So after settling in with django-plotly-dash, OPCUA was the next package to configure for production.

At the time, I had only been looking at past data, information written to files an hour after the event had passed. In order for my dashboards to fulfill an operator's or factory manager's needs, I needed to show how the machine was performing at any given instant. Production personnel would want to see data as it changes in real time.

After a lot of testing, here is how I wanted it to work;

The machines were already broadcasting their information using an OPCUA server on board the machine, but nothing was receiving it on the other end. It was my job to create a subscription client to listen for all the data changes within each endpoint. My plan was to create one connection managed by the web application, and from here I would create a temporary cache for the page. Every so often, an external ‘watchdog’ class would make a redundant snapshot during the machine's run-time.

Useful UAExpert properties

UAExpert was a very useful program for testing connections and discovering node namespaces. It provides a GUI and allows you to better organize your connected systems. With its help, I could navigate to the nodes I needed namespaces for and call them directly. In code, it looked something like this:

from opcua import Client, Subscription, ua

client = Client(input_ip_address) # Address in the form of opc.tcp://IP:Port

Create a client object using the Client class provided.

sub_handler = SubHandler()

client.connect()

server_node = client.get_server_node() #optional

gen_subscription = client.create_subscription(1000, sub_handler)

this_node = client.get_node("ns=3;s=::NODE_NAME") # string notation of the node

gen_subscription.subscribe_data_change(this_node)

Create a SubHandler class which defines a datachange_notification method, then create a subscription object with create_subscription, passing in an instance of that handler. Use the client's get_node method to make node objects, and then call the subscription's subscribe_data_change method to attach each node to the handler.

Within this method, use any means to capture the data. I used an if-statement block to determine which node names were changing, and then saved those values to a dictionary keyed by node name.

Note: OPCUA forewarns you not to use any slow or network operations as this could lead to a build up of events in the stream.
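Since the handler class itself isn't shown above, here is a minimal sketch of what it can look like (the cache attribute and names are illustrative, not the actual implementation):

class SubHandler:
    """Minimal python-opcua subscription handler sketch."""

    def __init__(self, cache):
        self.cache = cache  # e.g. a plain dict shared with the dashboard view

    def datachange_notification(self, node, val, data):
        # Keep this fast: no slow or network operations here,
        # or events can pile up in the subscription stream.
        self.cache[str(node.nodeid)] = val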

And so now I had real-time data entering the dashboard. Using the callback syntax I had used earlier, I could pick up changes in the data made to my dictionary, and use this as a sort of cache for the view.

The next step was to put a bit of meaning behind all this data, and give it some stylish looks.

Adam

by Adam Pucciano at Fri May 15 2020 14:31:51 GMT+0000 (Coordinated Universal Time)


Yoosuk Sim

Debugging with GCC: GIMPLE

GCC and GIMPLE

One of the very first things GCC asks GSoC applicants to do, even before writing the application, is to try various different debugging techniques using GCC. I was personally familiar with the basic compile-with-g-flag-and-use-gdb method. Turns out, there's more: GIMPLE.

A Simple but Non-trivial Program

Problem Description

The instruction asks to compile a simple but non-trivial program with some flags that generate debugging information:
-O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all
. Because I had been reading about ways to debug GCC just prior to that statement, I immediately thought "GCC" and tried
make -j8 CXXFLAGS="-O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all"
. This was a mistake: it turns out GCC can't be compiled with those flags. Thankfully, the GCC developers have a very active IRC channel where I could signal SOS.

Resolution

jakub
and
segher
were quick to respond to my call for help.

jakub: it isn't meant that you should build gcc with those flags, you should pick some source and compile that with the newly built gcc with those flags
jakub: and look at the files it produces
jakub: the dumps show you what gcc has been doing in each pass, so e.g. when one is looking at a wrong-code bug, one can look at which dump shows the bad code first and debug the corresponding pass
jakub: another common method (at least recently) is, if looking e.g. at a wrong-code regression where some older gcc version worked fine and current doesn't
jakub: to bisect which gcc commit changed the behavior and from diffing the dumps with gcc immediately before that change and after find out what changed and try to understand why
segher:where "recently" is ten or so years :-)
segher:(but diffing dump files isn't great still)
So, the above flags provide even more depth of understanding of what is happening from the compiler's perspective. Digging around in the GCC Developer Options documentation and the gcc
man
output, I found what some of the flags were for:
  • -S: Stop after the stage of compilation proper; do not assemble.
  • -fdump-tree-all: Control the dumping at various stages of processing the intermediate language tree to a file. In this case, all stages.
  • -fdump-ipa-all: Control the dumping at various stages of inter-procedural analysis to a file. In this case, all inter-procedural analysis dumps.
  • -fdump-rtl-all: Make debugging dumps during compilation; the full list is too big to repeat here.
This adds a whole new depth of information I hadn't imagined before. I dusted out my old assignments from OpenMP class and decided to give it a spin.

Dusting out my old assignments

The assignment was a simple affair, comparing the efficiencies of various different features of OpenMP addressing one particular problem: reduction. I decided to look particularly at the worksharing example: it had some single-thread operations as well as several different OpenMP operations, and I hoped that would give me a glimpse at various different forms of output. Since my ultimate goal would be to work on OMPD, brushing up on OpenMP context seemed logical. My source code was all in the solitary
w4.work-sharing.cpp
file, so I issued the compilation command:
g++ -std=c++17 -fopenmp -O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all w4.work-sharing.cpp
. No errors. Instead, I ended up with 225 dump files, written in an intermediate code language called GIMPLE.

GIMPLE

GIMPLE, as defined in the GIMPLE documentation, is "a three-address representation derived from GENERIC by breaking down GENERIC expressions into tuples of no more than 3 operands (with some exceptions like function calls)." My shallow understanding and assumption is that GIMPLE is language and architecture independent, which sounds similar to the Java bytecode idea, although the latter is probably very dependent on the JVM as the target architecture. It is also here where many of the optimizations take place. Since GIMPLE is intermediary code for all languages supported by GCC, and for all architectures, the same optimization done on GIMPLE affects all languages and architectures.

Files of interests

I could not dare read all 255 files. Perhaps some day, but it would have overwhelmed me. Besides, it seems like each file is an evolution of another, making them look very similar to each other with specific tweaks applied at each step. That said, I was immediately attracted to
.gimple
, which seems to be the point where the code was translated to GIMPLE, as well as
.omplower
that has lower level GIMPLE methods specific to OpenMP, and
.ompexp
and
.ompexpssa2
, each with different optimization. More studying to do.

Going forward

I am going to learn more about GIMPLE, and try to understand the OpenMP portion of it more in-depth. I should also start reading up on OMPD documentation to find some correlation to link the two projects together. This is exciting for me, and I can't wait to take the next step.

by Yoosuk Sim at Fri May 15 2020 00:30:34 GMT+0000 (Coordinated Universal Time)

Monday, May 11, 2020


Adam Pucciano

Python Series: First Approach

This is a continuation from my Python series. Check out the introduction and Part 1 where I explain all the tools used for this project here.

Part 2: First Approach

My first approach at Python started with Pandas. Reading as much as I could, and looking at the useful ‘cookbooks‘ the community had posted, was a tremendous help. It also helped that I got sidetracked in my spare time with Python's plethora of very easy to use image classification packages. When things are easy to use, it makes them super fun, and Python for me was starting to become just that.

I struggled a lot at first – not so much with getting used to the syntax, but with the confidence that the code I wrote would run properly; it was a different kind of script than what I was used to writing. It was a very short program, but a lot of the heavy lifting was done by Pandas. More and more, I kind of fell in love.

I started with the CSV files that my original C# code would generate. Pandas has a nifty method that creates DataFrames from these types of files. DataFrames are where you want to start your data processing. These objects have very useful behaviors that allow the programmer to manipulate data with ease. Organized as a matrix, they can be queried, merged, and filtered like SQL, but can also be iterated through like a list, meaning that any mathematical operations on the data set become much easier to perform. This also cut out much of the code base that I needed to maintain and allowed me to focus my efforts on coding features that used the data in creative ways. I started to realize why Python is branded as such a great tool for data scientists.

An example of this can be seen in the report outlining triggered stop alarms. This report shows when alarm events halted production, indexed by timestamp. It indicates which alarm code was triggered and how long the alarm stayed present until it was cleared by an operator. Using these matrices, I could not only display the alarms on a time scale, but also aggregate the data and figure out which alarms were triggered the most. Furthermore, I could group the data by day, index the result by the type of alarm that was triggered, and sum the total duration for that particular event. An analyst could now clearly see on which days the alarms were triggered the most, not just an overall average.

An early version of the alarm report system, graphed on the same timeline as efficiency data
Using the built-in interactive nature of the graphs allows users to specify the data they need.

The biggest advantage is that I could put my trust in the integrity of this library, which eliminated the need for me to validate results myself. Since this was a one-man operation, I wanted to find more libraries that I could leverage like this.

This is exactly what I found with Plotly and Dash. Both of these libraries work hand in hand, as they were developed by the same group, and to no surprise Plotly also works out of the box with DataFrames. This again proved to be a critical moment for productivity, as I could continue to put my trust in these libraries to get what I needed in development. I urge any programmer to do the same, especially in an experimental phase. Do not be hesitant to try new libraries!
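
To give a rough idea of what "working out of the box with DataFrames" looks like, here is a minimal plotly-dash sketch; the data, column names, and layout are made up for illustration, and it assumes a recent Dash release where dcc and html can be imported from the dash package.

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Made-up data: total alarm duration per day.
df = pd.DataFrame({
    "day": ["2020-05-04", "2020-05-05", "2020-05-06"],
    "total_alarm_seconds": [165, 390, 80],
})

# Plotly Express accepts the DataFrame directly.
fig = px.bar(df, x="day", y="total_alarm_seconds",
             title="Total alarm duration per day")

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Machine alarm overview"),
    dcc.Graph(figure=fig),  # interactive hover/zoom/pan with no extra code
])

if __name__ == "__main__":
    app.run_server(debug=True)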

So began my new plotly-dash program, which took in my already processed files and created interactive graphs that could prove useful to a user. One issue remained, however: the whole process still felt very segregated. The files would have to be processed first (in C#) and then used in the Python script. This created a whole bunch of problems and loopholes, was not very cohesive, and turned into a very big mess when I tried to fill the gaps.

In about a day, I managed to rewrite the whole C# application I had made previously as a single Python script, which I simply called 'binReader.py'. With the help of numpy – a common Python library – I could define a datatype that exactly matched my schema and tell a file-reading method how to traverse each of the various binary files by describing their chunk size. The process looked something like this:

import os

import numpy as np

import pandas as pd

Define a structured block (effi_names here is the list of column names defined elsewhere in the script):

exampleType = np.dtype({
    'names': effi_names,                   # column names for the structured block
    'offsets': [1, 25, 29, 33],            # byte offset of each field within a record
    'formats': ['a20', 'f4', 'i4', 'u4'],  # 20-byte string, float32, int32, uint32
    'itemsize': 55                         # total size of one record in bytes
})

Read the files, given my datatype, with numpy's fromfile method:

df_toReturn = pd.DataFrame(np.fromfile(os.getcwd() + filename, exampleType))

That's all it took to do it in Python! It really helped having the C# version after all, as I could run both and verify the results using my functional Windows program. I guess that was some advantage to having first programmed the logic in a more familiar language. I knew when I had an incorrect offset, or when an itemsize caused a misalignment in the file, just by looking at the outputs. Using the fromfile method, the rest of the procedure was the same, and I was able to add all the datatypes. Excluding some logic to decode the byte results to UTF-8, the actual code followed exactly this procedure.
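
For reference, the UTF-8 cleanup mentioned above can be handled per column in pandas; the column name below is just a placeholder, since the real field names live in effi_names.

# The 'a20' fields come back as raw, NUL-padded bytes, so decode and strip them.
df_toReturn["some_text_column"] = (
    df_toReturn["some_text_column"].str.decode("utf-8").str.rstrip("\x00")
)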

It became a utility class, a script that no longer needed to be tied to the Windows platform. It would take in binary files and spit out a DataFrame directly, or CSV files, depending on the context. At a generous count around 300 lines of code (with comments), the new binary file reader felt more portable and better suited for the job. I felt better prepared for future additions, even if the end result felt a bit more 'hacked' together, as much of the logic in this script was taken from parts I had read and followed in the Python examples section. During the development phase it proved to be a reliable script and more maintainable than its C# counterpart.

As a result, my initial C# program became obsolete, but not without teaching me a few lessons. For one, I learned how to read binary data into a struct using C#, but more importantly I learned not to be afraid of learning and producing software at the same time: to break out of my comfort zone and use the best tool for the job, instead of fixating on unfit tools for a new project. Not to say I am any expert in C# either; perhaps one day I will have to revisit this.

Still rough around the edges

While not much had changed from the user's perspective, I knew that this substitution was a substantial upgrade. I could now take binary files directly from machines and turn them into figures which the user could interact with and possibly extrapolate meaning from. For instance, one could compare downtimes with alarm or mold change events. There was no need for the middleware to do any conversions or pre-processing, and at this point exporting the files to CSV format was left as an option rather than the defining feature. As a bonus, this eliminated the need for programming logic to handle saving or overwriting the converted files and their respective directories.

This program was still a little rough around the edges: it required a repository of folders and files to look at while running, along with a few other things that made it rugged. I started to gather all the shortcomings of the new program and added in what I could for quality of life or visuals. I began planning my internal improvements. I needed to continue my direction of automating the experience of 'looking at machine data'. I knew the exterior 'flair' would have to come after, but at least for now I had a pretty flexible and reliable core to work with. I would go on to prototype several other libraries on their own while I did some usability testing with my newest Python interface.

 

Adam

by Adam Pucciano at Mon May 11 2020 17:54:11 GMT+0000 (Coordinated Universal Time)

Thursday, May 7, 2020


Adam Pucciano

Creating real value with real-time dashboards in Python

I know I do not post enough programming content, so here’s the first of many upcoming entries about my experience with Python.

NIIGON is an injection molding manufacturing company, the place where I work professionally and dedicate my time. They create massive industrial-grade machinery for plastics manufacturing. My responsibilities there are mostly development IT, which is amazing because it means I get to work on various types of programming projects: web portals, Windows applications, open-software integration, and building automation. NIIGON does it all from the ground up with a small team of on-site IT. It feels much like the freedom of a start-up, with the strong foundation of a Fortune 500 company. It's actually a fantastic place to spend my time, and I have learned a lot.

Here is my chance to share a bit of what I do professionally with other programmers, and give some special 'shoutouts' to all the frameworks and libraries I've been using.

Over the past couple of months I have been developing a portable dashboard for production machines that are in service. Using a communication connection to the machine, it simply pulls data from active nodes and displays some basic (but important) information on the screen. Cycle time, parts per hour, and efficiency are what I learned to be some of the main metrics for measuring machine effectiveness and health. This dashboard also displays the currently assigned job, job progress and any alarms that may stop the machine’s automatic cycling mode.
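
As a rough sketch of what pulling data from active nodes can look like with the Free OPC UA library, here is a minimal example; the endpoint address and node ids are placeholders, not the actual addresses used on NIIGON machines.

from opcua import Client  # python-opcua, the "Free OPC UA" library

client = Client("opc.tcp://machine-address:4840")  # placeholder endpoint
client.connect()
try:
    # Node ids are illustrative; the real ones depend on the machine's address space.
    cycle_time = client.get_node("ns=2;s=CycleTime").get_value()
    parts_per_hour = client.get_node("ns=2;s=PartsPerHour").get_value()
    print(f"cycle time: {cycle_time}s, parts/hour: {parts_per_hour}")
finally:
    client.disconnect()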

Without explaining further, I will let these screen-captures better describe what I have developed so far.

A live dashboard view of a machine running in production
An analysis view of a Machine’s OEE statistics

 

What I came up with is fairly standard stuff, but I would ultimately like to share my journey in creating this feature using Python; hopefully, if someone out there is looking to build a similar project, they can find some assistance here.

I anticipate this will be a long post, so I am dissecting this write-up into a few parts. In this series, I will cover all of the tools you require, how it all works together, and what's in store for the future, while also commenting on my experience creating and prototyping this type of technology and some of the hardships and learning curves I accumulated throughout this project.

Part 1: The pieces of the puzzle.

Major frameworks/libraries involved in this project:

  • Django-plotly-dash – for containing the web applications forked to work with dash
  • Free OPCUA – for communicating with OPCUA servers running on the equipment
  • Plotly-dash – for displaying dashboards and creating graphs or figures
  • pandas – handle data locally in an easy way with Data frames
  • json – of course, to move around objects or information in HTTP requests
  • pyodbc – for connections to the SQL database

Notable contributors:

  • ftplib – standard library for handling ftp connections
  • numpy – can’t have pandas without a little bit of numpy
  • Jinja2 – template with tag syntax for easy HTML creation

This project all started with machine data. NIIGON (formerly Athena Automation) had been ahead of the curve, having had a part of their system monitoring its sensors and activity, and collecting this data for years. An on-board OPC UA server was embedded on each machine, and when coupled with additional custom software, its function was to write bits of snapshot information to tiny binary files stored on the operating system.

There was only one inconvenience, and it proved to be much more intricate than I first planned for. Getting the information off the machines and into a usable format was of course necessary for all this to work. This would be key to making real value out of what had been recorded. Much of today's technology revolves around data analytics and collecting Big Data.

We (NIIGON) had much of the collecting (and recording) already finished. As mentioned, the machines themselves would gather operations data that occurred in the field. At first I decided to make an application in C# to help translate these files. It was a language that was more familiar to me after working with Windows Forms a lot, and it did the job quite well. Machine files could be read into CSV format and further extrapolated in Excel using a little GUI program loaded with checkbox options and buttons. However, this approach felt closed-ended. The files would be in another format, and that would be the end of it. It was hardly the flashy app I had envisioned showing my children one day. But more importantly, I had to ask myself whether this application fit the requirements and goals I had set out to meet, and whether it would ultimately be usable and provide some real value.

My answer to this reflection was a quick "No". In its current form, this application would not be adopted by another department for machine analytics. I thought a lot about being the user of this GUI, and during testing it was hard to let go of my preconceived knowledge of how it works. While it worked well enough and did all the right things, the check options and functionality for file types seemed more technical than I had anticipated for someone on a sales force team. Still pretty good for a first prototype, but the focus on how the app was used was misguided. I wanted to make the experience even more effortless, so that the app was enjoyable to use and did much of the work for the user, instead of depending on different programs to draw graphs or extract information from the raw data. I had to change my perspective on what the application provided to the user as a tool, and how it was actually used as a program. I concluded that all of the tool-like features should just be automated – as part of what the application does or sets up as an environment for the user to work with. How one interacts with the application was starting to evolve. Instead of the user having to concern themselves with organizing the data, they should be working with the data to create meaning from it.

I wanted to automate this approach even further. All of my research began to point to Python. Namely, hits like Pandas and numpy were the first I came across: libraries (packages) aimed at data scientists who need a high-level programming language to process large amounts of data. And that's when my dive into Python truly began.

Thanks for reading – More to come!

Adam

 

 

by Adam Pucciano at Thu May 07 2020 18:11:33 GMT+0000 (Coordinated Universal Time)


Corey James

GraphQL RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js

Hello, Welcome to my blog! Following up on my most recent post "RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js", I have modified the REST API I made into a GraphQL API. What is GraphQL? GraphQL is a query language for an API. GraphQL allows front-end applications to have control to specify the …

Continue reading "GraphQL RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js"

by Corey James at Thu May 07 2020 00:55:17 GMT+0000 (Coordinated Universal Time)

Monday, May 18, 2020


Yoosuk Sim

Setting up IRC

About IRC

Internet Relay Chat was once the most popular real-time, computer-based communication method for a large spectrum of community groups. While it has largely been supplanted by more contemporary methods for most people, IRC is still a preferred means of communication among older software projects like GCC.

The Client

As an old technology that is also very favored among programmers, IRC has many clients in different flavors, from GUI to CLI to headless. As a Linux user with a strong attraction to tmux, I chose WeeChat. Depending on your distro or OS, install your client first.

Configuring Weechat

I will be using the GCC channel on the OFTC server as the example.

How are we configuring this?

While WeeChat has a configuration file, it officially advises against using the file to configure the program's behavior. Instead, it promotes the `/set` dialog within the client to set the proper configuration.

Connect to server

Let's first connect to the server. The `#gcc` channel is hosted on the OFTC server (irc.oftc.net). Let's connect to it with a non-SSL connection first: `/connect irc.oftc.net`.

Set nick

Once connected, our goal is to set up and own a nickname. Likely, WeeChat is already using your login name, but if you desire a different name, or if the current name is already taken, you need to issue `/nick NAME_HERE`. Replace `NAME_HERE` with an appropriate nickname.

Register nick

Once an appropriate, free nick is chosen, let's register it so that it uniquely identifies the user. The server has a service named NickServ. Its primary job is, as the name suggests, to service nicknames. Users interact with NickServ by sending messages to it. To register our nick, send the following: `/msg NickServ REGISTER YOUR_PASSWORD_HERE YOUR@EMAIL.HERE`, replacing the password and email as appropriate. Depending on the server, there may be extra steps involved. For OFTC, I had to log into the OFTC web interface, send out a verification email, and verify via the emailed link.

Register SSL to the nick

Adopted from OFTC site.
  • Generate cer and key file:`openssl req -nodes -newkey rsa:2048 -keyout nick.key -x509 -days 3650 -out nick.cer`
  • Generate pem file: `cat nick.cer nick.key > nick.pem`
  • Set permissions: `chmod 400 nick.pem nick.key`
  • Copy the files: `mkdir -p ~/.weechat/certs && mv nick.* ~/.weechat/certs`
  • Within Weechat, add server: `/server add OFTC irc.oftc.net/6697 -ssl -ssl_verify -autoconnect`
  • Within Weechat, add certs: `/set irc.server.OFTC.ssl_cert %h/certs/nick.pem`
  • Quit and restart Weechat
  • connect to server: `/connect OFTC`
  • Identify yourself: `/msg NickServ IDENTIFY YOUR_PASSWORD_HERE`
  • Associate Nick to Cert: `/msg nickserv cert add`
  • close everything and reconnect to server to verify connection and Nick authentication

Other nitty-gritty settings

Turn on autoconnect, and set a default channel for autojoin.

Other things to consider

I need to get highlights working properly so that if anyone mentions my ID, it is easy to spot when I return to the chat. I am also interested in running this headless on another server/VM. The notify script also seems like an interesting feature. This blog post and another post seem to provide some interesting options for scripts. This GitHub gist also provides a wealth of information.

by Yoosuk Sim at Mon May 18 2020 22:31:47 GMT+0000 (Coordinated Universal Time)

Wednesday, May 6, 2020


Steven Le

Angular Bootstrap Project Developments Part 6

Hello, Hello again and welcome back to the (supposed) last installment!

Today we’re going to be going through a short topic: Having a Google maps embed on a component of the website.

The process of researching and implementing this was pretty easy. It doesn't follow the conventional Google Maps API method but instead uses a third party to accomplish the mapping. The main trouble I found was the styling, which I overcame with the iframe documentation. Let's start the process.

Implementing a Google Maps Embed

The first thing I did was get an embedded Google Maps link from this third-party website. Insert the location of your choice and set the width and height to your desired size. When you get the HTML code it looks like a lot, but it really isn't, as you're only going to be using one part of it: the src. Here's what it looks like:

Repeating what I said before we’re only really using this part of this long html code:

src="https://maps.google.com/maps?q=toronto&t=&z=13&ie=UTF8&iwloc=&output=embed" frameborder="0" scrolling="no" marginheight="0" marginwidth="0"

Like the picture and the code above though, we’re also going to be using the iframe tag, I just cannot add it here as WordPress has trouble displaying HTML code.

Go through the process of creating a component as you did in the older parts. As a refresher just call:

 ng generate component insert-component-name-here

In my project I made my component a contact-info page. Now, after that, add it to your app-routing.module.ts by importing the component and route. It should look something like this:

import { NgModule } from '@angular/core';

import { Routes, RouterModule } from '@angular/router';

import { HomeComponent } from './home/home.component';

import { ProjectsComponent } from './projects/projects.component';

import { ContactInfoComponent} from './contact-info/contact-info.component';

import { NotfoundComponent } from './notfound/notfound.component';

const routes: Routes = [

  {path:  "", pathMatch:  "full",redirectTo:  "home"},

  {path: "home", component: HomeComponent},

  {path: "projects", component: ProjectsComponent},

  {path: "contact-info", component: ContactInfoComponent},

  {path: "404", component: NotfoundComponent},

  {path: "**", redirectTo: '404'}

];

@NgModule({

  imports: [

    RouterModule.forRoot(routes, {useHash: true})

  ],

  exports: [RouterModule]

})

export class AppRoutingModule { }

Code link here.

Now onto the html code for this embed, insert the usual !DOCTYPE html, html and body tags to be used:

And like older parts I’m going to add divs for the website’s styling and copying over the same css.

In order for me to get this embed to fit properly and scale when the screen size changes, we're going to be using a bit of Bootstrap. Specifically, we're going to be using their grid system, so under the website-background div we're going to insert 3 divs: a div with class container, a div with class row, and a div with class col-md-12. Each of these encapsulates the grid layout: container is the box we're working with, row is the row in the container, and col-md-12 is the width of the row we're working with. Bootstrap grids have a max row length of 12, so in this case the embed takes up the entire row.

Within these divs, I created a div class to handle the css and inserted an iframe tag with the source from above.

All in all it should look something like this:

Code Link here.

I opted to add another row with my information to be displayed under it but that’s optional.

Unlike the other components, this one has specific spacing on the container so I’ll add it here as well.

for the iframe styling and

for the background + header/footer padding.

Code link here.

Now that was simple, you have a Google map embed on your website. Thanks for reading!

Now that I'm finished with what I had been working on for a little bit, I may come back to this project and improve aspects of it, maybe add more features, but as it is, it is complete. If there are any changes I will add them as another part of this series. Thanks again!

Click here to go to the last part

Click here to go to the beginning of the project

by Steven Le at Wed May 06 2020 18:49:48 GMT+0000 (Coordinated Universal Time)


Yoosuk Sim

Act 3 Scene 1

Wait, what?

The last post ended with completing Act 1 Scene 1, with hints to Act 1 Scene 2. Yeah, a lot has happened since.

Goblin Camp

Some of the last work on this blog was about the Goblin Camp, a revival project of an abandoned code base, which in turn was inspired by a great game, Dwarf Fortress. Since then, I have learned more about data structures and object design patterns. With each enlightenment in programming, I became more and more aware of why this code had been abandoned multiple times by different groups. I became one of them. This doesn't mean my goal of marrying parallel programming with this great game concept is abandoned: since the last post, I have also taken a course on GPU programming, and it is giving me new ideas and goals. It just will not be happening with the existing Goblin Camp. More on this in the future. And this concluded my Act 1.

How about Act2

My Act 2 began with my Coop placement at Fundserv. It was truly a learning experience, in the best sense of the word. I feel spoiled with experiences I only hope I may continue to experience as I continue my journey as a software developer. I was extremely lucky to have entered the company when it was going through a massive modernization process. New infrastructures were being set in place that allowed for a mature work-from-home environment; this played a pivotal role in the company's continued success during COVID-19 crisis. Not only that, I was placed with a team in charge of spearheading the standardization of the new software development methodology including creating a new CI/CD pipeline and splitting monolithic code base into multiple micro services, to name a few. My job was to learn, apply, and document, which gave me a wealth of hands-on interaction with multiple products that would later escalate to production. My code, in production. It also meant I would create knowledge transfer documents and prepare KT sessions for other developers to introduce the new methodology, although regrettably, due to COVID-19, the KT session was postponed past my contract period. Still, I gained knowledge enough to stand before other programmers to share it. I very much felt that I was part of a development community. I was growing up as a programmer. The completion of my two successful Coop semesters at Fundserv also meant the end of my Act 2.

Act 3 Scene 1: Google Summer of Code

Just as my Act 2 completed, I was fortunate enough to get accepted by Google Summer of Code 2020. I will be working with GCC to begin the implementation of OMPD. This would allow GDB to debug OpenMP code in a more sensible manner that better reflects the OpenMP standards. I am very excited to be working with C/C++ codes, and I look forward to writing more about it as I progress through the project.

by Yoosuk Sim at Wed May 06 2020 13:00:59 GMT+0000 (Coordinated Universal Time)

Friday, May 8, 2020


Calvin Ho

Typescript + Linters

Taking a small break from Telescope until the summer semester resumes. I've started collaborating with an elementary school friend on a project to build a clone of the game Adventure Capitalist. After working with JavaScript for so long, I decided to try doing this in TypeScript. It went pretty well up until I had the following line of code:

const index = this.shops.findIndex((shop: Shop) => shop.name == shopName);

When I was trying to compile my code, I kept getting the following error

Property 'findIndex' does not exist on type 'Shop[]' 

Pretty sure this should work, as shops is an array of type Shop. As developers usually do when they run into issues, I started googling the problem and checking Stack Overflow. It recommended I change my tsconfig.json "target" to es2015 (findIndex() is an ES6 function) and add es6 to "lib". I did all that and tried compiling: still no good. I reached out to my frequent collaborator from Telescope, @manekenpix, and he suggested I just try running the code. It works?

Turns out it was a linter issue, although it still compiled properly. Upon further research two hours later, I realized I was using the CLI command wrong, or at least in a way that was going to cause errors. I was compiling my .ts to .js with the command tsc index.ts instead of tsc; when a specific file name is used, tsc disregards the tsconfig.json file settings and just tries to compile your TypeScript to JavaScript. So I tried running 'tsc', and it worked! No errors, and it output all the compiled .js files inside the /build folder (ignored in .gitignore) I specified in my tsconfig.json file.

by Calvin Ho at Fri May 08 2020 07:05:49 GMT+0000 (Coordinated Universal Time)

Tuesday, May 5, 2020


Steven Le

Angular Bootstrap Project Developments Part 5

Hello, Hello again and welcome back to the next installment!

Today we're going to go over reading local JSON files and using the items from those files within the website. This process is pretty quick, but like the previous matter with routing and HashLocationStrategy, there was not an easy-to-understand, easy-to-read guide about it. Most of the guides were using other packages/libraries to handle it (which is good for a multitude of reasons). But if you are like me, you just want a low-overhead, quickly functioning website that doesn't have too many hard-coded areas and exhibits at least some basic modularity.

The process is fairly simple but let’s start out with looking at our file.

[{

    "ID": "001",

    "name": "Desktop Environmental Setup Manager",

    "link": "https://github.com/Dragomegak/DESM-Personal",

    "description": "This is a Microsoft Windows Program application and uses a QT Frontend with C++ and Win32API backend to create a virtual desktop based on a profile system.",

    "workDescription": "The profiles are made ahead of time and have a list of programs to run. The profiles can be made for different scenarios that require different programs to be launched.",

    "technologiesUsed": "C++, QT Framework, Win32 API"

},

{

    "ID": "002",

    "name": "Alien Attack",

    "link": "https://github.com/Dragomegak/Alien-Attack",

    "description": "An IOS Game where you tap aliens coming down from the top of your screen!",

    "workDescription": "This game used a multitude of IOS/Swift unique features like delegation for physics, gamescene/gameview for specific tools used to make games and easy local database storage.",

    "technologiesUsed": "Swift 4.2, xCode IDE"

},

{

    "ID": "003",

    "name": "Nodejs Filer",

    "link": "https://github.com/Dragomegak/filer",

    "description": "Node-like file system for browsers - https://filerjs.github.io/filer",

    "workDescription": "I contributed a test for a npm based project Filer which is a drop-in replacement for node's fs module, a POSIX-like file system for browsers.",

    "technologiesUsed": "Nodejs, Javascript"

},

{

    "ID": "004",

    "name": "Android Jetpack",

    "link": "https://github.com/Dragomegak/Android-Jetpack",

    "description": "Documentation created to speed up development and become a be a reference for Android Development.",

    "workDescription": "A project I contributed to in order to understand more about android development as well as continue to learn and make meaningful and useful documentation for others to use.",

    "technologiesUsed": "Markdown, Kotlin"

},

{

    "ID": "005",

    "name": "Zopfli",

    "link": "https://github.com/Dragomegak/zopfli",

    "description": "Zopfli Compression Algorithm is a compression library programmed in C to perform very good, but slow, deflate or zlib compression.",

    "workDescription": "Referencing: https://stevenleopensourceblog.wordpress.com/2018/11/06/spo600-saa-project/. A project I analyzed the runtime and and tried my hand at seeing whether I could further optimize using methods like optimization flags, unique CPU performance characteristics, auto-vectorization etc.",

    "technologiesUsed": "Assembly Language, x86/x64 Architecture, ARMv8a/AArch64 Architecture"

}

]

This is the file we are going to be using; the location to store it would be the assets folder where we stored the images for our carousel. I opted to put it in its own folder as well. The path should look something like /project_name/src/app/assets/projectData/filename.json.

The next step we have to do here is to make it so that the web application itself can see the JSON file. We’re going to be doing 2 things: creating a json-typings.d.ts file in the src directory and importing the file itself to the component we are using it in.

Let’s start with creating the json-typings.d.ts file:

declare module "*.json" {

    const value: any;

    export default value;

}

This is the file contents, nothing special about it.

Let's move on to the component we want to vend the contents to. In my case it will be projects.component.ts. The first thing we're going to add is an import statement at the top, importing the JSON file from its location:

import projects from '../../assets/projectData/projects.json';

After that, we're going into the component class to allow the values to be read. You want to add all the keys that we'll be using, since a JSON file works with key/value pairs:

export class ProjectsComponent implements OnInit {

  constructor() { }

  public projectList:{

    name:string,

    link:string,

    description:string,

    workDescription:string,

    technologiesUsed:string,

  }[] = projects;

  ngOnInit(): void {

  }

}

It should look something like the above. Here is the file.

Now that we have fully initialized the JSON file, we can now use it! My implementation is fairly shallow with the shallow goal of just listing the contents within a table to be displayed.

Navigate to your component's HTML file and do the normal initializations: add the !DOCTYPE html, html and body tags. Afterward, initialize a table; I added a header row to the table to show which value is which. Now that the first row is there, let's add the second row onward, which takes the data from my JSON file. I'll be adding an image of how the HTML code looks, as it is difficult to add HTML to WordPress blog posts; also, here is the file.

We're going to be using the built-in ngFor loop in order to vend the data. It is important that you reference the value we initialized in the projects.component.ts file. *ngFor="let item of projectList" is exactly like a for loop: for item in projectList. The name item is interchangeable and doesn't necessarily need to be named that way. You can also nest for loops, but since we are only vending the data shallowly we aren't going to go into that.

Now compile and run the instance of the web app using ng serve and it should be all up and running!  You can even deploy it using the guide from the last part!

Thanks for reading this post, the next part (may) be the last part I will be doing for this project and will be addressing using a google maps embed on a contact info component. See you next time!

Click here to go to the next part

Click here to go to the last part

Click here to go to the beginning of the project

 

by Steven Le at Tue May 05 2020 23:06:14 GMT+0000 (Coordinated Universal Time)

Monday, May 4, 2020


Ray Gervais

Following The Tomato Timer

My experience with Pomodoro, 9-5

I've always had terrible luck focusing when not in the office; I believe the cause is rooted in the environment itself implying that "work gets done" here versus at home. With that said, it's easy to imagine that the past eight weeks I've been "attempting" to work from home have been quite difficult. After acknowledging that we may be in this for the long haul, I knew that I'd have to find a better coping/focus strategy; one more rigid-yet-balanced, one which screams "productivity" and enforces it. Essentially, I wanted a focus system which prioritized focused work in a single domain versus micromanaging various domains between my work and personal tasks. It dawned on me that every phone I've ever used has always had the same set of applications installed, including one which I used to leverage often in high school: a Pomodoro app.

I thought, why not utilize the neglected concept? What's the worst thing that could happen?

For those who haven't heard of the Pomodoro Technique before, Wikipedia describes it as:

There are six steps in the original technique:

  • Decide on the task to be done.
  • Set the pomodoro timer (traditionally to 25 minutes).[2]
  • Work on the task.
  • End work when the timer rings and put a checkmark on a piece of paper.[6]
  • If you have fewer than four checkmarks, take a short break (3–5 minutes), then go to step 2.
  • After four pomodoros, take a longer break (15–30 minutes), reset your checkmark count to zero, then go to step 1.

Work for 25 Minutes

Not so hard, right? In the morning while sipping coffee, I find the first two hours are very meeting/planning heavy, so getting through 25 minutes is often easy. Where I truly wanted to establish the difference between my distracted "norm" and this technique was avoiding all social-based distractions where appropriate, such as Twitter, Reddit, Facebook, and so on. Those notifications and feeds I'll check during the 5-minute break. More on that in the next section.

My current tasks often follow a stop-start pattern, with 5 or more minutes of waiting (often much more) due to builds, approvals, and testing. In consequence, this forced me to consider how I would maintain a solid 25 minutes of focused work without losing velocity. This meant planning exactly how I'd execute tasks so that they could be done in parallel. Essentially, intentional multitasking.

Break for 5 Minutes

I found after doing this for a week (pretty consistently, if I may add) that setting aside these social platforms and forums meant that in a small break I had to prioritize what I cared to look into. When time is limited, I opted to check Reddit, Twitter, and Hacker News more than I would glance at Facebook, Instagram, or even YouTube. I'm intrigued to see if I'll follow the same pattern for week two.

With the bigger breaks, this is where I'd devote time to accomplishing my own tasks such as a short workout / walk, blogging, gaming, reading, etc. Though it's still hard to transition and stay focused on a single lane when changing lanes on the "task highway", I definitely felt I had accomplished more during the work week in comparison.

Apps Galore

One of the best attributes of this technique and its digital following? Pomodoro applications roam freely in various offerings, written in Vala and Node, and they even integrate with Chrome. That's not counting the dozens on iOS and hundreds on Android! I say it doesn't hurt at all to try, and I'm recommending that all who have the capacity attempt a week following the timer and evaluate from there.

Going Forward

I want to try utilizing Pomodoro throughout all of May, and compare my mood, productivity, drive and results to April. I believe that during these panic-centric times, keeping tabs on yourself is essential, along with questioning and experimenting where possible. I've come to learn that despite my advocacy of being spontaneous and never fitting the mould (instead finding the niche which fits you), I am still an individual that thrives on routine and structure. These experiments allow me to learn that, and also enable further questions, such as what structures work best and what can be replaced for different workflows and perspectives.

Resources

by Ray Gervais at Mon May 04 2020 00:00:00 GMT+0000 (Coordinated Universal Time)

Sunday, May 3, 2020


Steven Le

Angular Bootstrap Project Developments Part 4

Welcome back to another part of the progress of the project!

Today's topic will be a bit short, but it definitely did not have enough in the way of easily accessible and easy-to-understand guides. The topic: deployment and routing. Let's not waste time.

Deployment

In my project I was fixated on using GitHub Pages as the host. In the first part of these write-ups there was a very well-written and easily followable guide to tying a git repository to GitHub. We will be using that repository and all the cumulative changes we have made over the past 3 parts.

Here is the official page of GitHub Pages for you to take a look.

In all honesty, you can get away with not using the specific naming of the project. It's what I did, and it did affect things, but not to a high degree.

Here’s how we set it up before we deploy it:

  1. First go to the repository connected to the project
  2. Go to your settings
  3. Go to the GitHub Pages section
  4. Under source, change it to the master branch (temporary step)

That will be the first part of the deployment process. GitHub will now upload it and host it on their servers; this process can take anywhere from 10 to 20 minutes. The service will be searching through the master branch for content to display. Because we have no deployment code, it should only display the README.md file.

Now that we have set up the basis on GitHub’s end its now the time to go into the details of Angular Web app Deployment.

Open your IDE and set up the terminal such that you are in the directory where all the work is in. In my case, I opened my project into Visual Studio Code, opened the terminal and changed the directory to the project (cd mainWebsite).

Do last checks to the website before we create a deployment, it’s a good practice to double check.

Now in terminal call this command:

ng build --prod --output-path docs --base-href /<project_name>/

For me it would be:

ng build --prod --output-path docs --base-href /mainWebsite/

This command corresponds to how GitHub does deployment, remember to check Angular documentation for how to deploy to other services.

This process should take a little time and should hang at the start, at 10% building and when it generates ES5 bundles. (at least from my experience).

Push your changes to the GitHub repository and go back to the repository settings. This time we change the source to the master branch /docs folder. This should now only look at the docs folder that we created when we deployed the website's output.

Remember that it should take some time for GitHub to push the changes to their servers, so take a coffee break, maybe pat yourself on the back for work well done.

Now that you're able to see the website on GitHub Pages, congratulations, you have deployed an output of your website. There's just one thing wrong: if you refresh the deployed web application, it just breaks (or goes straight to GitHub's 404 page). What shall we do now?

Fixing Deployment Routing

So far we've been using Angular's default PathLocationStrategy routing in order to vend updates to web pages within our web application. From my research this is good for development, but I just could not get it working in my deployment, so I defaulted back to HashLocationStrategy routing.

Unlike the other processes, this should be a quicker section. Like the deployment section, searching this up and trying to find a straightforward yet easily understandable guide was difficult. Hopefully I can articulate the small amount of work that needs to be done. Let's get on with it.

Under app-routing.module.ts, under the @NgModule section and in imports replace

@NgModule({

  imports: [

    RouterModule.forRoot(routes)

  ],

  exports: [RouterModule]

})

export class AppRoutingModule { }
with
@NgModule({

  imports: [

    RouterModule.forRoot(routes, {useHash: true})

  ],

  exports: [RouterModule]

})

export class AppRoutingModule { }

Under app.module.ts, we’re going to be adding 2 sections to this file: add

import { HashLocationStrategy, LocationStrategy } from '@angular/common';

with all the other imports on the top of the file and replace

providers: [{provide: LocationStrategy, useClass: HashLocationStrategy}],

in the providers section under @NgModule. As a whole it should look something like this:

import { BrowserModule } from '@angular/platform-browser';

import { NgModule } from '@angular/core';

import { AppRoutingModule } from './app-routing.module';

import { AppComponent } from './app.component';

import { HomeComponent } from './home/home.component';

import { HeaderComponent } from './header/header.component';

import { FooterComponent } from './footer/footer.component';

import { ProjectsComponent } from './projects/projects.component';

import { ContactInfoComponent } from './contact-info/contact-info.component';

import { HashLocationStrategy, LocationStrategy } from '@angular/common';

import { NotfoundComponent } from './notfound/notfound.component';

@NgModule({

  declarations: [

    AppComponent,

    HomeComponent,

    HeaderComponent,

    FooterComponent,

    ProjectsComponent,

    ContactInfoComponent,

    NotfoundComponent

  ],

  imports: [

    BrowserModule,

    AppRoutingModule

  ],

  providers: [{provide: LocationStrategy, useClass: HashLocationStrategy}],

  bootstrap: [AppComponent]

})

export class AppModule { }

There is a lot of stuff in my example code that may not be present in your project, like other components. That doesn't matter in this case, as we are just using this as a comparison to how it should generally look.

Now that we are using the hash location strategy, we're almost done. Do another deployment as we did earlier, but do not push the code to GitHub just yet. We have one last thing we need to add (and I wish we didn't need to do this).

When deployment is over, navigate to the docs folder and open index.html, locate and replace:

<base href="insertnameofprojecthere">

the base href could be showing anything here but it does not matter. We will be changing it to:

<base href="./index.html">

Save the file. Push the changes over to GitHub, wait for the uploading to finish.

Now we should have a fully functioning Deployment of the project with no refreshing difficulties!

Sorry for the short write-up, but this was one of the most time-consuming sections of research I had to do on my own. Hopefully you don't have as much difficulty as I did. The next part will be about reading and using local JSON on the web page. Much love, and the next part should be coming soon!

Click here for the next part

Click here for the last part

by Steven Le at Sun May 03 2020 19:10:22 GMT+0000 (Coordinated Universal Time)

Saturday, May 2, 2020


Steven Le

Angular Bootstrap Project Developments Part 3

Welcome back, Nice to see ya’ll again.

I left off last time with the basics of starting up, from starting the project to creating the header and footer components such that they're modular. To explain what I wanted to do: I wanted the header and footer to be modular so that the content would be the only thing changing. This is so I don't have to reload all the elements over and over again.

The main topic I want to go through today is setting up the home page and the image carousel. I remember this being a bit finicky because there was not much information (that I could find) about the process. A lot of time was spent researching and styling it in a way that I liked, more time than I would like to admit.

I'm also going to try adding HTML code in the form of pictures, as WordPress doesn't have a free and easy way of showing example HTML code.

Home Component Generation & Routing

Let’s start through the process, we need to generate a component so we can use it with the routing.

ng generate component home

This creates the components needed to tie it into the web application. Just to make sure, take a look at your app-routing.module.ts and check that the component is imported alongside the NgModule and Routes, RouterModule imports. If it is not, add the following right under them:

import { HomeComponent } from './home/home.component';
also add this to the list of routes:

{path: "home", component: HomeComponent}

It should look something like this.

We're creating a route to the component to be used by the header navbar we made last time, specifically the nav link to home. Now onto the meat and potatoes of today's post: the image carousel.

One thing I noticed as a side effect of having modular header and footer components was that the website was just displaying the header and footer over my content. The method I went with in order to have it line up properly was to create padding on every component. (This is not best practice, just negligence in my case.)

So the first thing you should do is enter the home.component.html page and add the tags:

These are just standard declarations and a practice that should always be followed so that the browser knows what it is displaying, in this case HTML 5 (the web standard).

Within the body tags we're going to create a div tag where we're going to contain everything we're doing on the web page.

As a placeholder, in order to see something on your screen, you should add some text inside the div tags:

Now that we have that ready, let's set up the page's styling so we can see the text.

Navigate to home.component.css and open it up.

It should be empty, but let's change that; let's get the padding out of the way so we can see the text. Add this to the css file:
.website-background{

    /*add these to every web page to make footer align properly*/

    margin: 0;

    min-height: calc(100vh - 24px);

}

p{

   /* text that will be padded from the top*/

   position: relative;

   padding-top: 8vh;

}

The two things we lined out in the CSS file correspond to the stuff we just put in our HTML: website-background pads the website down so you have a footer, and the text gets padding on top of it so that it can be displayed. Now, this is not a long-term solution; it has the terrible side effect of giving all of the document's p (paragraph) tags an awful 8vh padding from the top, but this will change soon.

Image Carousel

The meat and potatoes of this post. Let's start out with a prerequisite: the pictures should ideally be around the same scale and a similar size for a streamlined experience. There are workarounds, but getting this up and running was the priority for me, so I did not have the time to work out the kinks. I added 2 images under the assets/images folder created with the project under the apps folder (the folder path from the generation would be: project-name/src/app/assets/images/).

Here's where I went back to home.component.html, removed the p (paragraph) tag, and created a lot of div tags in its place, each with a specific purpose:

  • carousel to handle the container
  • carousel-size to handle the size
  • carouselControls in order to handle the controls
  • carousel-indicators in order to handle the indicators shown so you can see which image you are on
  • carousel-inner to initialize the carousel and carousel-item in order to initialize the images.

Here is the code I used with them:

The only things you would need to change are the links to the pictures you have supplied yourself under the comment: Initialize Carousel Images w/ Captions.

Sadly, I wish there was more to explain, but this was just baseline/example code I was using in order to get an image carousel up quickly.

Onto home.component.css: the first thing is to remove the p section we created earlier, as that was for example purposes. I added this code to the css file:

.website-background{

    background-color: #123C69;

    /*add these to every web page to make footer align properly*/

    margin: 0;

    min-height: calc(100vh - 48px);

}

.carousel{

    /* Shifts page down from the header */

    position: relative;

    /* 

    Minor change to account for header

    top: 35px; 

    */

    padding-top: 1vh;

    padding-bottom: 0.180vh;

    background-color: grey;

}

.carousel-inner{

    /* Handles carousel image properties */

    display: inline-block; 

    position: relative;

    height: 60vh;

}

.carousel-item{

    /* Handles picture framing and alignment */ 



    position: absolute;

    top: 50%;

    left: 50%;

    -webkit-transform: translate(-50%, -50%);

    -ms-transform: translate(-50%, -50%);

    transform: translate(-50%, -50%);

    max-height: 650px;

    width: auto;

}

These are the styles I used to pad the image carousel from the top, scale it properly to the photos I added, and position it in the middle of the page at full width.

For the rest of my web page, I added a mission quote section with links to my social media, my GitHub portfolio, and a ping to my Discord account. Here's a link to my code and its styling. The reason I am not touching on it is that it is fairly standard in format and there are much better tutorials for creating it, whereas I found it fairly difficult to find a decent image carousel explanation.

Here is a link to the work as I did it.

For next time, I’m going to go about deploying the web app itself and routing. It will be a shorter post but afterward we can go about another topic: Local JSON file reading.

Click here for the next part

Click here for the last part

by Steven Le at Sat May 02 2020 21:56:05 GMT+0000 (Coordinated Universal Time)


Corey James

RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js

Hello, Welcome to my blog! Recently, I subscribed to a tutorial website called Thinkster.io. Thinkster has tutorials dedicated to making production-ready applications. They have tutorials for many different frameworks. I decided to go through their Node.js tutorial. I also wanted to get more comfortable with TypeScript, so I changed the tutorial up some by implementing …

Continue reading "RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js"

by Corey James at Sat May 02 2020 04:40:55 GMT+0000 (Coordinated Universal Time)

Friday, May 1, 2020


Calvin Ho

Data Structures and Algorithms

I'm finally done all my courses, and since the job market isn't that great right now, I have taken a different approach. Instead of working on personal projects or contributing to open source, I've decided to brush up on data structures and algorithms for a bit.

One thing I found lacking in Seneca's Computer Science related programs was the science portion of Computer Science; maybe it was because I was enrolled in CPA and not their BSD program.

For CPA, the only course that deals with data structures and algorithms, DSA555, is offered as a professional option. After taking the course I understood why; as a pretty smart person in the class said, "If this was a mandatory course, a lot of people would be dropping out of the program; it was pretty hard." I still wish there were another similar course or two offered so we could learn more about analyzing the run times of more complex functions and graphs.

I took DSA555 last winter and have more or less forgotten how to implement, or how most of the things I learned in the class work: linked lists, trees, different types of searches and sorts. So now, as I am typing this blog, I am solving and looking at problems on LeetCode.

A friend of mine currently works for Tesla and is looking for a new job. Most of the places he's been interviewing at for a full stack position have also asked him data structure and algorithm questions on top of questions involving joining two tables in SQL or how to mock a request for testing.

I think this is fair as it makes a developer conscious of the code they write and makes it easier to recognize patterns and respond accordingly.

For example, say I have an array of sorted numbers and I have to find if a given number exists:

I could loop through the array and check each element
Or
I could check if my given number is the middle element in the array. Depending on if it is bigger or smaller I can use the upper or lower half of the array and repeat the same steps, until the number is found or not found.

The second option sounds tedious, but depending on the size of the array, it may actually turn out to be faster than the initial option.
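
Here's a quick Python sketch of that second approach (binary search), just to make the comparison concrete:

def binary_search(sorted_nums, target):
    """Return the index of target in sorted_nums, or -1 if it is not present."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        elif sorted_nums[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1

Each step halves the remaining range, so this is O(log n) instead of the O(n) linear scan.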

It also allows developers to think about the function they are writing performance-wise. Is it a O(n) solution? O(n^2) or worse O(n^3)? If it is the latter two, can I improve the run time of it? For personal projects this may not matter as much, but if you are working on software or systems that will be used by millions of people or contains a ton of data, these little things start to add up!

by Calvin Ho at Fri May 01 2020 16:05:58 GMT+0000 (Coordinated Universal Time)

Tuesday, April 28, 2020


David Humphrey

Experimenting with Twilio

I've always wanted an excuse to try working with Twilio for programmatic messaging and SMS.  This week I got my chance, and wanted to share some of what I learned.

One of the many side projects I maintain is an IoT system I've been cobbling together for my family and neighbours.  It's a mix of native code, node.js, and React that provides a user-friendly way to interact with local sensor data (video, images, XML, and JSON sources via an event stream).

During the stay-at-home pandemic, we've needed to use this data in new ways, and a recent request came in for mobile notifications based on certain sensor conditions.  My first thought was to use Web Push Notifications via a Service Worker in my web app.  However, I have to support Windows, macOS, Chrome (desktop and mobile), and Safari on iOS (iPad and iPhone).  Essentially, my users represent just about every platform and form-factor you can imagine, and unfortunately Safari doesn't support the Push API, so I gave up on that approach.

This left me thinking about SMS.  I knew that services like Twilio allow programmatic SMS, but I'd never tried using them before.  It turned out to be quite simple, taking about 30 minutes to add to my node.js web server.  Here's what's involved.

First, I had to understand which of the various "SMS," "MessageService," and "Notification" APIs I needed.  Twilio's docs are excellent, but I found it a bit confusing to land on exactly what I needed at first.

You create an account to start, and Twilio gives you some credits to play with upfront.  I didn't end up using these, since I didn't want to have extra text added to my messages.  I found it odd that when I "upgraded" my account I was instantly charged $20 on my credit card, versus simply opening an account that they could charge using the pay-as-you-go model I'm using.

Next, I had to "buy" a phone number.  It's really "rent" a phone number monthly.  You can specify which features you need for the phone number, and pick a country and local area.  After some searching and figuring out their UI, I got a number in Toronto.  The phone numbers cost $1 per month.

Now I used my account console to get the Account SID and Auth Token I'd need to write code against their API.  Because I'm working in node.js, I used their open source node helper library:

npm install twilio

The code to send an SMS looks like this:

const Twilio = require('twilio');
const twilio = new Twilio(accountSID, authToken);

await twilio.messages.create({
  body: 'This is your text message content',
  to: '+15551235555',   // the number to text
  from: '+15553215555'  // the Twilio number you bought
});

The phone numbers you use have to be in the +15551234567 format.  I needed to text multiple numbers at the same time for various events, so I do something like:

await Promise.all(
  ['+15551234567', ...].map((number) =>
    twilio.messages.create({
      body: '...',
      to: number,
      from: '...',
    })
  )
);

That's basically it.  It's amazingly simple.

Now let's talk about cost.  I was surprised at how expensive it is.  I'm sending SMS messages in Canada, and my real cost per text looks like this:

  • Outbound SMS: $0.0085 (Carrier Fee) + $0.0075 (Twilio) = $0.016
  • Inbound SMS: $0.0085 (Carrier Fee) + $0.075 (Twilio) = $0.0835

So it basically costs 1.6 cents to send, and 8 cents to receive a text with Twilio in Canada.  That's more than double what Twilio tells you, unless you really squint at the fine print--surprise! Canadian telecom carriers want their cut of the action too.

It might not sound like a lot per text, but it adds up as you start scaling this.  I noticed today that Twitter has disabled its SMS tweet feature, I assume due to cost, despite their deals with carriers.  In my case, based on historical sensor data, it would have cost me $13.84 to send these notifications last week.  So far I've spent a little over $4 since I started running this new feature.

Needless to say, I'm pretty motivated to make sure my code behaves, and to throttle and debounce excessive events.  It's too bad Twilio's clients don't bake this in automatically, but they do have some nice error and event reporting tools that let you keep track of things in real time.

All in all, I'm pleased with the service.  I suspect when the pandemic is over I'll probably end the experiment, and it will be interesting to see what the final bill is.  Until then, it works perfectly.  I think this is called "You get what you pay for."

by David Humphrey at Tue Apr 28 2020 21:58:00 GMT+0000 (Coordinated Universal Time)