Planet CDOT (Telescope)

Wednesday, June 3, 2020

OMG! Ubuntu

Geary email client is testing a responsive (phone-friendly) UI

I’m a big fan of desktop e-mail client Geary — it’s in our list of the best Ubuntu apps after all — so I’m particularly thrilled to hear that a “mobile version” is in the […]

This post, Geary email client is testing a responsive (phone-friendly) UI is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed Jun 03 2020 02:13:00 GMT+0000 (Coordinated Universal Time)

Tuesday, June 2, 2020

OMG! Ubuntu

Lenovo Will Sell Ubuntu on More ThinkPads, ThinkStations This Summer

Lenovo has announced that all of its ThinkStation desktop PCs and ThinkPad P series laptops will be available to buy preloaded with Ubuntu this summer.

This post, Lenovo Will Sell Ubuntu on More ThinkPads, ThinkStations This Summer is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue Jun 02 2020 19:09:16 GMT+0000 (Coordinated Universal Time)

Firefox 77 Released with Minor Changes (So Don’t Get Excited)

Firefox 77 has arrived with a whole load of …nothing major. Still, iterative improvement is as welcome as shiny new features so read on to discover more.

This post, Firefox 77 Released with Minor Changes (So Don’t Get Excited) is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue Jun 02 2020 17:41:30 GMT+0000 (Coordinated Universal Time)

Linux Marketshare Increased Again Last Month (So Did Ubuntu’s)

Usage of Linux-based desktop operating systems grew again in May 2020, according to stats shared by web analytics firm NetMarketShare.

This post, Linux Marketshare Increased Again Last Month (So Did Ubuntu’s) is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue Jun 02 2020 15:23:58 GMT+0000 (Coordinated Universal Time)


Pocket provides fascinating reads from trusted sources in the UK with newest Firefox

It’s a stressful and strange time. Reading the news today can feel overwhelming, repetitive, and draining. We all feel it. We crave new inputs and healthy diversions—stories that can fuel our minds, spark fresh ideas, and leave us feeling recharged, informed, and inspired.

Connecting people with such stories is what we do at Pocket. We surface and recommend exceptional stories from across the web to nearly 40 million Firefox users in the U.S., Canada, and Germany each month. More than 4 million subscribers to our Pocket Hits newsletters (available in English and in German) see our curated recommendations each day in their inboxes.

Today we’re pleased to announce the launch of Pocket’s article recommendations for Firefox users in the United Kingdom. The expansion into the UK was made seamless thanks to our successes with English-language recommendations in the U.S. and Canada.

What does this mean for Firefox users in the UK? Open a new tab every day and see a curated selection of recommended stories from Pocket. People will see thought-provoking essays, hidden gems, and fascinating deep-dives from UK-based publishers both large and small — and other trusted global sources from across the web.

Open a new tab to see a curated selection of recommended stories

Where do these recommendations come from? Pocket readers. Pocket has a diverse, well-read community of users who help us surface some of the best stories on the web. Using our flagship Pocket app and save button (built right into Firefox), our users save millions of articles each day. The data from our most saved, opened, and read articles is aggregated; our curators then sift through and share the very best of these stories with the wider Firefox and Pocket communities.

The result is a unique alternative to the vast array of content feeds out there today. Instead of breaking news, users will see stories that dig deep into a subject, offer a new perspective, and come from outlets that might be outside their normal reading channels. They’ll find engrossing science features, moving first-person narratives, and entertaining cooking and career how-tos. They’ll discover deeply reported business features, informative DIY guides, and eye-opening history pieces. Most of all, they’ll find stories worthy of their time and attention, curated specifically for Firefox users in the United Kingdom. Publishers, too, will benefit from a new stream of readers to their high-quality content.

Pocket delivers these recommendations with the same dedication to privacy that people have come to expect from Firefox and Mozilla. Recommendations are drawn from aggregate data and neither Mozilla nor Pocket receives Firefox browsing history or data, or is able to view the saved items of an individual Pocket account. A Firefox user’s browsing data never leaves their own computer or device.

We welcome new Pocket readers in the UK — alongside our readers in the U.S., Canada, and Germany — and hope you find your new tab is a breath of fresh air and a stimulating place to refuel and recharge at a time when you may be needing it most.

Download Firefox to get thought-provoking stories from around the web with every new tab. Be sure to enable the recommendations to begin reading.

The post Pocket provides fascinating reads from trusted sources in the UK with newest Firefox appeared first on The Mozilla Blog.

by Mozilla at Tue Jun 02 2020 13:00:03 GMT+0000 (Coordinated Universal Time)

Monday, June 1, 2020


We’ve Got Work to Do

The promise of America is “liberty and justice for all.” We must do more to live up to this promise. The events of last week once again shine a spotlight on how much systemic change is still required. These events — the deaths at the hands of police and civilians, the accusations that are outright lies — are not new, and are not isolated. African Americans continue to pay an obscene and unacceptable price for our nation’s failure to rectify our history of racial discrimination and violence. As a result, our communities and our nation are harmed and diminished.

Change is required. That change involves all of us. It’s not immediately clear all the actions an organization like Mozilla should take, but it’s clear action is required. As a starting point, we will use our products to highlight black and other under-represented voices in this unfolding dialog. And we’re looking hard at other actions, across the range of our activities and assets.

Closer to home we’ve reiterated our support for our black colleagues. We recognize the disproportionate impact of these events, as well as the disproportionate effect of COVID-19 on communities of color. We recognize that continued diligence could lead others to think it is “business as usual.” We know that it is not.

And this has left many of us once again, questioning how to meaningfully make our world better. As our starting point, Mozilla is committed to continuing to support our black employees, expanding our own diversity, and using our products to build a better world.

The post We’ve Got Work to Do appeared first on The Mozilla Blog.

by Mozilla at Mon Jun 01 2020 19:14:44 GMT+0000 (Coordinated Universal Time)

OMG! Ubuntu

Foliate Makes Finding Free eBooks Easier, Adds Support for Comics

Finding free ebooks to read in Foliate, a GTK ebook reader for Linux desktops, just got a whole lot easier. The new Foliate 2.2.0 release comes with several enhancements, one of which is better eBook […]

This post, Foliate Makes Finding Free eBooks Easier, Adds Support for Comics is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon Jun 01 2020 14:30:54 GMT+0000 (Coordinated Universal Time)

Linux 5.7 Released, This is What’s New

Linux 5.7 has arrived, serving as the latest mainline release of the Linux Kernel — but what’s changed? Well, in this post we recap the new features and core changes bundled up inside this kernel […]

This post, Linux 5.7 Released, This is What’s New is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon Jun 01 2020 11:51:04 GMT+0000 (Coordinated Universal Time)

Friday, May 29, 2020

Corey James

React Practice With GraphQL

Hello, and welcome to my blog! Recently I have been working through some tutorials. In this blog post, I will be reviewing my experience going through the React and Redux tutorial. I completed the tutorial using the GraphQL API I made. The tutorial uses a REST API, so I had to make lots …

Continue reading "React Practice With GraphQL"

by Corey James at Fri May 29 2020 02:41:20 GMT+0000 (Coordinated Universal Time)

Thursday, May 28, 2020

OMG! Ubuntu

The Raspberry Pi 4 is Now Available With 8GB RAM

A Raspberry Pi 4 with 8GB RAM is now available to buy. This model is the most powerful version of the device released so far, and the most expensive at $75.

This post, The Raspberry Pi 4 is Now Available With 8GB RAM is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Thu May 28 2020 14:31:11 GMT+0000 (Coordinated Universal Time)

Android Mirroring App ‘Scrcpy’ Just Added a Bunch of New Features

If you read this blog regularly enough you’ll be familiar with scrcpy, an ace root-free way to mirror your Android smartphone on your Ubuntu desktop and interact with it. Scrcpy is free, it’s open source, […]

This post, Android Mirroring App ‘Scrcpy’ Just Added a Bunch of New Features is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Thu May 28 2020 01:11:33 GMT+0000 (Coordinated Universal Time)


Mozilla’s journey to environmental sustainability

Process, strategic goals, and next steps

The programme may be new, but the process has been taking shape for years: in March 2020, Mozilla officially launched a dedicated Environmental Sustainability Programme, and I am proud and excited to be stewarding our efforts.

Since we launched, the world has been held captive by the COVID-19 pandemic. People occasionally ask me, “Is this really the time to build up and invest in such a large-scale, ambitious programme?” My answer is clear: Absolutely.

A sustainable internet is built to sustain economic well-being and meaningful social connection, just as it is mindful of a healthy environment. Through this pandemic, we’re reminded how fundamental the internet is to our social connections and that it is the baseline for many of the businesses that keep our economies from collapsing entirely. The internet has a significant carbon footprint of its own — data centers, offices, hardware and more require vast amounts of energy. The climate crisis will have lasting effects on infrastructure, connectivity and human migration. These affect the core of Mozilla’s business. Resilience and mitigation are therefore critical to our operations.

In this world, and looking towards desirable futures, sustainability is a catalyst for innovation.

To embark on this journey towards environmental sustainability, we’ve set three strategic goals:

  • Reduce and mitigate Mozilla’s operational impact;
  • Train and develop Mozilla staff to build with sustainability in mind;
  • Raise awareness for sustainability, internally and externally.

We are currently busy conducting our Greenhouse Gas (GHG) baseline emissions assessment, and we will publish the results later this year. This will only be the beginning of our sustainability work. We are already learning that transparently and openly creating, developing and assessing GHG inventories, sustainability data management platforms and environmental impact is a lot harder than it should be, given the importance of these assessments.

If Mozilla, as an international organisation, struggles with this, what must that mean for smaller non-profit organisations? That is why we plan to continuously share what we learn, how we decide, and where we see levers for change.


Four principles that guide us:

Be humble

We’re new to this journey and to the larger environmental movement, and we recognise that mitigating our own operational impact won’t be enough to address the climate crisis. We understand what it means to fuel larger movements that create the change we want to see in the world. We are leveraging our roots and experience towards this global, systemic challenge.

Be open

We will openly share what we learn, where we make progress, and how our thinking evolves — in our culture as well as in our innovation efforts. We intend to focus our efforts and thinking on the internet’s impact. Mozilla’s business builds on and grows with the internet. We understand the tech, and we know where and how to challenge the elements that aren’t working in the public interest.

Be optimistic

We approach the future in an open-minded, creative and strategic manner. It is easy to be overwhelmed in the face of a systemic challenge like the climate crisis. We aim to empower ourselves and others to move from inertia towards action, working together to build a sustainable internet. Art, strategic foresight, and other thought-provoking engagements will help us imagine positive futures we want to create.

Be opinionated

Mozilla’s mission drives us to develop and maintain the internet as a global public resource. Today, we understand that an internet that serves the public interest must be sustainable. A sustainable internet is built to sustain economic wellbeing and meaningful social connection; it is also mindful of the environment. Starting with a shared glossary, we will finetune our language, step up, and speak out to drive change.


I look forward to embarking on this journey with all of you.

The post Mozilla’s journey to environmental sustainability appeared first on The Mozilla Blog.

by Mozilla at Thu May 28 2020 13:13:43 GMT+0000 (Coordinated Universal Time)

Wednesday, May 27, 2020

OMG! Ubuntu

GNOME Devs Make Major Improvements to the Apps Grid

Some interesting things are happening upstream in GNOME Shell that affect the "Applications" screen, app folders, and the associated code.

This post, GNOME Devs Make Major Improvements to the Apps Grid is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed May 27 2020 15:11:02 GMT+0000 (Coordinated Universal Time)

Monday, May 25, 2020

OMG! Ubuntu

First Ubuntu 20.04 Point Release Arrives July 23

The Ubuntu 20.04.1 point release is due for release on July 23, 2020. The update doesn't have a new hardware enablement stack but is notable for LTS users.

This post, First Ubuntu 20.04 Point Release Arrives July 23 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon May 25 2020 13:31:42 GMT+0000 (Coordinated Universal Time)

Sunday, May 24, 2020

OMG! Ubuntu

Transmission 3.0 Released, Here’s How to Install it on Ubuntu

A new version of open-source torrent client Transmission is available to download. In this post I share details on what’s changed and show you how to install the update on your system using the official Transmission […]

This post, Transmission 3.0 Released, Here’s How to Install it on Ubuntu is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Sun May 24 2020 16:45:43 GMT+0000 (Coordinated Universal Time)

Friday, May 22, 2020


The USA Freedom Act and Browsing History

Last Thursday, the US Senate voted to renew the USA Freedom Act which authorizes a variety of forms of national surveillance. As has been reported, this renewal does not include an amendment offered by Sen. Ron Wyden and Sen. Steve Daines that would have explicitly prohibited the warrantless collection of Web browsing history. The legislation is now being considered by the House of Representatives and today Mozilla and a number of other technology companies sent a letter urging them to adopt the Wyden-Daines language in their version of the bill. This post helps fill in the technical background of what all this means.

Despite what you might think from the term “browsing history,” we’re not talking about browsing data stored on your computer. Web browsers like Firefox store, on your computer, a list of the places you’ve gone so that you can go back and find things and to help provide better suggestions when you type stuff in the awesomebar. That’s how it is that you can type ‘f’ in the awesomebar and it might suggest you go to Facebook.

Browsers also store a pile of other information on your computer, like cookies, passwords, cached files, etc. that help improve your browsing experience and all of this can be used to infer where you have been. This information obviously has privacy implications if you share a computer or if someone gets access to your computer, and most browsers provide some sort of mode that lets you surf without storing history (Firefox calls this Private Browsing). Anyway, while this information can be accessed by law enforcement if they have access to your computer, it’s generally subject to the same conditions as other data on your computer and those conditions aren’t the topic at hand.

In this context, what “web browsing history” refers to is data which is stored outside your computer by third parties. It turns out there is quite a lot of this kind of data, generally falling into four broad categories:

  • Telecommunications metadata. Typically, as you browse the Internet, your Internet Service Provider (ISP) learns every website you visit. This information leaks via a variety of channels (DNS lookups, the IP addresses of sites, TLS Server Name Indication (SNI)), and ISPs have various policies for how much of this data they log and for how long. Now that most sites use TLS encryption, this data will generally be just the name of the Web site you are going to, but not what pages you visit on the site. For instance, if you go to WebMD, the ISP won’t know what page you went to; they just know that you went to WebMD.
  • Web Tracking Data. As is increasingly well known, a giant network of third party trackers follows you around the Internet. What these trackers are doing is building up a profile of your browsing history so that they can monetize it in various ways. This data often includes the exact pages that you visit and will be tied to your IP address and other potentially identifying information.
  • Web Site Data. Any Web site that you go to is very likely to keep extensive logs of everything you do on the site, including what pages you visit and what links you click. They may also record what outgoing links you click. For instance, when you do searches, many search engines record not just the search terms, but what links you click on, even when they go to other sites. In addition, many sites include various third party analytics systems which themselves may record your browsing history or even make a recording of your behavior on the site, including keystrokes, mouse movements, etc. so it can be replayed later.
  • Browser Sync Data. Although the browsing history stored on your computer may not be directly accessible, many browsers offer a “sync” feature which lets you share history, bookmarks, passwords, etc. between browser instances (such as between your phone and your laptop). This information has to be stored on a server somewhere and so is potentially accessible. Firefox encrypts this data by default, but in some other browsers you need to enable that feature yourself.

So there’s a huge amount of very detailed data about people’s browsing behavior sitting out there on various servers on the Internet. Because this is such sensitive information, in Mozilla’s products we try to minimize how much of it is collected with features such as encrypted sync (see above) or enhanced tracking protection. However, even so there is still far too much data about user browsing behavior being collected and stored by a variety of parties.

This information isn’t being collected for law enforcement purposes but rather for a variety of product and commercial reasons. However, the fact that it exists and is being stored means that it is accessible to law enforcement if they follow the right process; the question at hand here is what that process actually is, and specifically in the US what data requires a warrant to access — demanding a showing of ‘probable cause’ plus a lot of procedural safeguards — and what can be accessed with a more lightweight procedure. A more detailed treatment of this topic can be found in this Lawfare piece by Margaret Taylor, but at a high level, the question turns on whether data is viewed as content or metadata, with content generally requiring a more heavyweight process and a higher level of evidence.

Unfortunately, historically the line between content and metadata hasn’t been incredibly clear in the US courts. In some cases the sites you visit are treated as metadata, in which case that data would not require a warrant. By contrast, the exact page you went to on WebMD would be content and would require a warrant. However, the sites themselves reveal a huge amount of information about you. Consider, for instance, the implications of having Ashley Madison or Stormfront in your browsing history. The Wyden-Daines amendment would have resolved that ambiguity in favor of requiring a warrant for all Web browsing history and search history. If the House reauthorizes USA Freedom without this language, we will be left with this somewhat uncertain situation but one where in practice much of people’s activity on the Internet — including activity which they would rather keep secret — may be subject to surveillance without a warrant.

The post The USA Freedom Act and Browsing History appeared first on The Mozilla Blog.

by Mozilla at Fri May 22 2020 15:29:13 GMT+0000 (Coordinated Universal Time)

Protecting Search and Browsing Data from Warrantless Access

As the maker of Firefox, we know that browsing and search data can provide a detailed portrait of our private lives and needs to be protected. That’s why we work to safeguard your browsing data, with privacy features like Enhanced Tracking Protection and more secure DNS.

Unfortunately, too much search and browsing history still is collected and stored around the Web. We believe this data deserves strong legal protections when the government seeks access to it, but in many cases that protection is uncertain.

The US House of Representatives will have the opportunity to address this issue next week when it takes up the USA FREEDOM Reauthorization Act (H.R. 6172). We hope legislators will amend the bill to limit government access to internet browsing and search history without a warrant.

The letter in the link below, sent today from Mozilla and other companies and internet organizations, calls on the House to preserve this essential aspect of personal privacy online.

Read our letter (PDF)


The post Protecting Search and Browsing Data from Warrantless Access appeared first on The Mozilla Blog.

by Mozilla at Fri May 22 2020 15:20:25 GMT+0000 (Coordinated Universal Time)

OMG! Ubuntu

Ubuntu 20.10 Release Date & Planned Features

With Ubuntu 20.04 LTS done and dusted developer attention now turns towards Ubuntu 20.10 which is due for release on October 22, 2020. Learn when Ubuntu 20.10 will be released and what new features it […]

This post, Ubuntu 20.10 Release Date & Planned Features is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Fri May 22 2020 13:31:00 GMT+0000 (Coordinated Universal Time)

Tuesday, May 19, 2020

OMG! Ubuntu

Notorious is new keyboard-driven note taking app for Linux

Last week I spotlighted Noted, a (rather splendid) keyboard-driven note taking app for macOS and Linux — but some of you weren’t convinced. It wasn’t the app per se; you liked its clean UI, and […]

This post, Notorious is new keyboard-driven note taking app for Linux is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue May 19 2020 21:10:58 GMT+0000 (Coordinated Universal Time)

Microsoft’s Open Source Terminal App Hits Version 1.0

Microsoft's Windows Terminal app has hit version 1.0. The open source app is available to install on Windows 10 from the Microsoft Store and on GitHub.

This post, Microsoft’s Open Source Terminal App Hits Version 1.0 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue May 19 2020 18:19:43 GMT+0000 (Coordinated Universal Time)

Windows 10 is Getting Support for GUI Linux Apps

Dream of being able to run your favourite Linux apps on the Windows 10 desktop? Me neither, but Microsoft is going ahead and doing it anyway in WSL 2.

This post, Windows 10 is Getting Support for GUI Linux Apps is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue May 19 2020 16:56:49 GMT+0000 (Coordinated Universal Time)

How to Give Audacity Audio Editor a Flat, Dark & Modern Look on Ubuntu

Audacity is a powerful audio editing tool but its appearance is utilitarian looking. In this post we show you how to theme Audacity with a dark, flat look.

This post, How to Give Audacity Audio Editor a Flat, Dark & Modern Look on Ubuntu is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Tue May 19 2020 14:32:29 GMT+0000 (Coordinated Universal Time)

Adam Pucciano

Python Series: Finishing Touches

This is part of my ongoing series to explore the uses of Python to create a real-time dashboard display for industrial machinery. Please read the preceding parts first!

Part 3: Finishing Touches 

Dashboard gauges indicating the Machine’s cycle time and efficiency

Things were finally coming together! It was now a matter of integrating more machines and optimizing page loading. I was also in a good position to add a few more quality-of-life features to the application, all of which were readily handled in Python.

FTP: Machines were still logging information to binary files, and had storage accessible via FTP. Creating a small manager class using Python’s standard ftplib was a great option for viewing historic information or downloading it all as an archive. I would then call on this manager script to help me load information during POST/GET requests.
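A minimal sketch of such a manager class, using only the standard ftplib. The host, anonymous login, and the `.bin` naming convention are assumptions for illustration; the post does not show the real details.

```python
from ftplib import FTP

class MachineLogManager:
    """Fetches binary log files from a machine's FTP-accessible storage."""

    def __init__(self, host, user="anonymous", password=""):
        self.host = host
        self.user = user
        self.password = password

    @staticmethod
    def binary_logs(names):
        """Keep only the machine's binary log files from a directory listing."""
        return sorted(n for n in names if n.endswith(".bin"))

    def download_all(self, dest_dir="."):
        """Connect, list the remote directory, and download each .bin file."""
        with FTP(self.host) as ftp:
            ftp.login(self.user, self.password)
            for name in self.binary_logs(ftp.nlst()):
                with open(f"{dest_dir}/{name}", "wb") as fh:
                    ftp.retrbinary(f"RETR {name}", fh.write)
```

A view handling a POST/GET request could then instantiate the manager and call `download_all()` (or a listing method) on demand, rather than keeping a connection open.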

File Viewer/Upload: Some machines in the field do not have internet access, and have a long-running history of their production written in binary files. The machine’s local files are the only record of its production efficiency. By reusing components from the statistics dashboard, I built a simple upload dialog paired with a file viewer that lets the user upload a report, interact with it, and display results the same way connected machines can be viewed. This could also be useful for files that originate from a simulation snapshot.

Redis and Docker: These are two of the many Django integrations, and modern tools that every web service looks to take advantage of in order to deliver cohesive, fluid content to its users. Used by tech giants like Twitter, GitHub and Pinterest, these two technologies work together to cache session information and reduce query load and server consumption. It would take another series to cover them in depth, and I only had a definition’s worth of understanding of each before this project began. But using the Django documentation in conjunction with Docker installation guides for Redis made it really straightforward to incorporate both within a few days. Django-plotly-dash makes use of channels (channels_redis) for live updating. It’s amazing how all these libraries start to chain together and very quickly help deploy what is needed much more professionally. At first the syntax was actually so simple that it confused me about how it all worked (I suspect a lot of magic in the underlying framework). With a few changes to the settings, and after reading some introductory tutorials, I had my application capturing data through a Redis cache running in a Docker container.
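As a rough illustration (not the project’s actual settings), the “few changes to the settings” typically amount to a cache entry plus a channel layer in `settings.py`. The backend paths below assume the django-redis and channels-redis packages and a Redis container published on the default port:

```python
# settings.py fragment (illustrative): point Django's cache at a Redis
# instance running in a local Docker container.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {"CLIENT_CLASS": "django_redis.client.DefaultClient"},
    }
}

# Channel layer used by channels_redis (and thus django-plotly-dash)
# for live updating.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("127.0.0.1", 6379)]},
    }
}
```

With the container started (for example via `docker run -p 6379:6379 redis`), Django picks these up with no further code changes.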

CSS/Bootstrap: This project gave me a chance to work on my CSS and JavaScript (front-end) skills. I am certain I still need a lot of work in this area, but every day I see small improvements to the overall look and feel of the interface. I know that with more practice it will soon transform into a fluid, dynamic UI.

Python was very portable. By using a ‘requirements.txt’ file I could easily move my development environment to a new machine. Remember also to spin up a virtual environment first; it works very well with the Django manager. Learning all of this new content may be daunting at first, but stick with it and I promise you it will be worth it (it always is)! This Python approach to the dashboard became easier and easier to make real. Each module was supported by ingenious packages made by the community.

This project has been a great experience, and a great stepping stone for continuing to improve my own skills while at the same time providing a fancy new piece of software.

This project seemed to evolve at a rapid pace, and I feel like I leveled up because of it. During this journey, I was able to:

  • Break out of a comfort zone and begin to master Python for software development.
  • Lead my own research and create a development plan for my own application
  • Learn more about the industrial manufacturing and injection molding industry
  • Read through and understand documentation, and become part of various Python communities
  • Touch upon and learn OPCUA, understanding how the protocol establishes communication between machinery
  • Brush up on my Linux skills, initializing and hosting an internal web server using Ubuntu
  • Learn how to create a Docker container running Redis
  • Work on my writing skills to communicate all this wonderful news!

Thanks for reading!



Please feel free to contact me about any of the sections you’ve read. I’d love to discuss it further or clarify any points you as the reader come across.

by Adam Pucciano at Tue May 19 2020 17:49:02 GMT+0000 (Coordinated Universal Time)

Monday, May 18, 2020

OMG! Ubuntu

KDE Plasma 5.19 Arrives Soon, This is What’s New

KDE Plasma 5.19 features a modest crop of changes, with various visual improvements and usability enhancements designed to make the release more enjoyable.

This post, KDE Plasma 5.19 Arrives Soon, This is What’s New is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon May 18 2020 17:38:18 GMT+0000 (Coordinated Universal Time)

Audacity 2.4 Released with New Audio Effects, New Time Toolbar (Updated)

Audacity, the open source audio editor, has a new version available to download with a new noise reduction effect, improved time toolbar, and other changes.

This post, Audacity 2.4 Released with New Audio Effects, New Time Toolbar (Updated) is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon May 18 2020 00:41:28 GMT+0000 (Coordinated Universal Time)

Yoosuk Sim

Setting up IRC

About IRC

Internet Relay Chat was once the most popular computer-based, real-time communication method for a large spectrum of community groups. While it has largely been supplanted by more contemporary methods for most people, IRC is still a preferred means of communication among older software projects like GCC.

The Client

As an old technology that is also much favored among programmers, IRC has many clients of different flavors, from GUI to CLI to headless. As a Linux user with a strong attraction to tmux, I chose Weechat. Depending on your distro or OS, install your client first.

Configuring Weechat

I will be using the GCC channel on the OFTC server as the example.

How are we configuring this?

While Weechat has a configuration file, Weechat officially advises against editing the file to configure the program’s behavior. Instead, it promotes using the `/set` command within the client to set the proper configuration.

Connect to server

Let's first connect to the server. The `#gcc` channel is hosted on the OFTC server. Let's connect to it with a non-SSL connection first: `/connect`.

Set nick

Once connected, our goal is to set up and own a nickname. Likely, Weechat is already using your login name, but if you desire a different name, or if the current name is already taken, you will need to issue `/nick NAME_HERE`. Replace `NAME_HERE` with an appropriate nickname.

Register nick

Once an appropriate, free nick is chosen, let's register it so that it uniquely identifies the user. The server has a service named NickServ. Its primary job is, as the name suggests, to service nicknames. Users interact with NickServ by sending messages to it. To register our nick, send the following: `/msg NickServ REGISTER YOUR_PASSWORD_HERE YOUR@EMAIL.HERE`, replacing the password and email as appropriate. Depending on the server, there may be extra steps involved. For OFTC, I had to log into the OFTC web interface, send out a verification email, and verify via the emailed link.

Register SSL to the nick

Adapted from the OFTC site.
  • Generate the .cer and .key files: `openssl req -nodes -newkey rsa:2048 -keyout nick.key -x509 -days 3650 -out nick.cer`
  • Generate the .pem file: `cat nick.cer nick.key > nick.pem`
  • Set permissions: `chmod 400 nick.pem nick.key`
  • Copy the files: `mkdir -p ~/.weechat/certs && mv nick.* ~/.weechat/certs`
  • Within Weechat, add the server: `/server add OFTC -ssl -ssl_verify -autoconnect`
  • Within Weechat, add the cert: `/set irc.server.OFTC.ssl_cert %h/certs/nick.pem`
  • Quit and restart Weechat
  • Connect to the server: `/connect OFTC`
  • Identify yourself: `/msg NickServ IDENTIFY YOUR_PASSWORD_HERE`
  • Associate the nick with the cert: `/msg NickServ cert add`
  • Close everything and reconnect to the server to verify the connection and nick authentication

Other nitty-gritty settings

Turn on autoconnect, and give a default channel to connect to for autojoin.
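For example, assuming the server was added as `OFTC` as above, those options can be set with the same `/set` dialog (the channel is whichever one you want to autojoin):
  • `/set irc.server.OFTC.autoconnect on`
  • `/set irc.server.OFTC.autojoin "#gcc"`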

Other things to consider

I need to get highlights working properly so that if anyone mentions my nick, it is easy to spot when I return to the chat. I am also interested in running this headless on another server/VM. The notify script also seems like an interesting feature. This blog post and another post seem to provide some interesting options for scripts. This git gist also provides a wealth of information.

by Yoosuk Sim at Mon May 18 2020 22:31:47 GMT+0000 (Coordinated Universal Time)

Sunday, May 17, 2020

OMG! Ubuntu

Enlightenment 0.24 Released with Assorted Changes

Enlightenment 0.24 is available to download. The latest release of the Linux and BSD desktop includes an improved screenshot tool and lower memory usage.

This post, Enlightenment 0.24 Released with Assorted Changes is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Sun May 17 2020 23:51:51 GMT+0000 (Coordinated Universal Time)

Saturday, May 16, 2020

Bowei Yao

A depressing scene

My wife and I went to A&W today to get some burgers. This particular A&W is located inside a convenience store.

We finished ordering and we were just looking around the shelves in the convenience store while waiting for our burgers to be made. Suddenly a man in line called out to us.

“Hey, you speak her language?”


He gestured towards a short lady leaning over the checkout counter of the convenience store.

“You speak her language?”

“I don’t know. What’s going on?”

“You wanna help her?”

It took me a while to grasp the situation. The lady had probably been holding up the line for a while due to miscommunication.

So I approached her while keeping my distance from everyone since, you know, this is the coronavirus special period. She’s short, fairly aged (I would say around 50-60 at least), and Asian.

I speak Mandarin, so I asked her in it. She answered. It didn’t take long before we figured out that her credit card was expired. She said she would go back to her car to get a new card. I translated that to the cashier, and we let the man, who happened to be the next customer in line, get his order processed.

Now, up to this point, everything was fine. Everybody was happy and nothing was wrong.

We got our A&W burgers and walked out of the convenience store, and this is what I see:

The Asian lady was standing next to an Audi SUV with the driver’s door open. Sitting in the driver’s seat was a young man wearing black sunglasses, arguing with the lady.

“What do you mean it doesn’t work?”

“The card won’t work, I’ve tried many times.”

“How can you be so stupid?”


I thought this was a scene that only appears in literature, but I guess literature does take its roots from real life after all.

For those of you who don’t know, this is a stereotypical type of behaviour and mannerism exhibited by spoiled fuerdai. You may google the term now, as it has officially entered the English language. It is a derogatory term aimed at the children of the nouveau riche from China.

So what are the things wrong with this picture?

Let’s unravel it bit by bit. Now, it’s fairly simple to see that it’s an argument between a mother and a son.

Perhaps you’ve seen some spoiled kids in your life – perhaps your neighbor’s kids or your relative’s kids. But this is a new level of spoiledness – a level you have never seen before, and one you would not accept or agree with under any circumstances. This level of spoiledness is rarely exhibited, or tolerated, by western parents.

However, this type of overprotective parent/kid pairing is common in China, where the parents not only do everything for their kid but also think on their kid’s behalf. The kid’s future, the kid’s school, the kid’s extra-curricular activities, what the kid wants to do in his/her free time – everything. The parents think and plan all of that for the kid, and act it out, regardless of the kid’s opinion.

… until the kid has reached the age of 20, and in more extreme cases, 30 and beyond. The parents have trouble letting go because it has become a long-time habit. The kid has gotten used to it, and in the back of his/her mind thinks he/she deserves it – that everything should be just the way it is, taken for granted.

Therefore, in this case, you see an elderly lady wobbling around running errands while the young man sits in a shiny car texting on his phone. There is no sense of respect in the words that come out of his mouth towards his mother, and no initiative exhibited in his actions. It does not occur to him to leave the car to help his mother, or to go in his mother’s place to make a simple purchase, despite the fact that his mother has great difficulty communicating.

by Bowei Yao at Sat May 16 2020 02:25:51 GMT+0000 (Coordinated Universal Time)

Friday, May 15, 2020

Adam Pucciano

Python Series: From app to dashboard

This write-up is part of my ongoing Python series, please check out the introduction or part 2 before continuing!

Part 3: Going Online

So now there existed an application where one could take old machine data files, run them through a sort of ‘processor’, and see the results. It was a nice little applet, but it was time to take the next step. This sort of application deserved to be better served: its access needed to be more flexible, and that’s when I made the pivot towards a web-based platform.

Plotly-dash had a partly integrated web-view and navigation system where the programmer could define a route to view the dashboard content. The issue I had with this implementation was that I could not share this view among multiple machines. The location definition had to be provided directly in the script that managed the dashboard.

if __name__ == "__main__":
    app.run_server(host='', port='8888')

This snippet sat at the bottom of my dashboard pages (which was actually super useful during testing); it basically said that if this is the main entry point of the program, spin up a server with the specified parameters. This was useful for testing because I could stand up a really simple dashboard to try out another approach or feature of the plotly-dash library. But I was almost done with this phase; I needed to change where this was being called and move it up the hierarchy.

An alternative was to allow the user to change the data via a drop-down menu, but I had already deduced many issues moving forward with this approach, namely that the dashboard page would still be the main entry point for the application. I wanted to hit a unique URL that pertained to a particular machine, and give very little responsibility to the user to correctly select the reports to display. Plus, I still needed to define a central hub, a destination where the user could log in and navigate to a particular machine and make some decisions on what to view. Plotly-dash did not really support this. I needed to take a step back to research and experiment with Python servers and how to program web services in Python.

The real heroes here are the developers of Django. This framework is an interesting one that has grown quite a bit in popularity. It also comes in a plethora of flavours. Another Django variant I am currently experimenting with is Django-oscar, which I hope to discuss in a future blog post. The reason I chose this framework as opposed to Flask is that it allowed me to easily set up multiple dashboards served from one host URL. Managing assets and connections also comes out of the box. I needed a dashboard for each machine that would use the same template but display machine data respective to the chosen machine. I did not want to manually define a page for each new machine connected to this project.

All the machine resources would be connected to this server – thus only one true connection to each machine had to be made. All other clients would then share this connection when viewing the machine in order to receive updates. Looking back on it now, this could have been possible with the Flask library, but Django (and more so Django-plotly-dash) enabled this quite nicely, promoting the use of multiple apps within the project while also providing management tools built into the framework. Of course, this was a tremendous upgrade from the basic use of Plotly dashboards, which were designed to serve a URL oriented around the dashboard itself.

Again, I was just left with my dashboard, seemingly unchanged only now accessible via the web. At this point, I was able to navigate to my dashboard through some boiler plate looking web portal. My progress was looking even better though as now I had a concrete platform and I could start to implement bigger changes!

With the introduction of Django-plotly-dash, much of the project shifted gears. I was implementing a lot more packages that handled the different pieces of execution mentioned previously, like pyodbc, free-opcua, ftplib, and json. I also gained experience with the built-in Django manager and settings files, both of which come with a lot of documentation to help harness their flexibility. It was also the first time I started to take advantage of Python virtual environments (venv).

Virtual environments are self-contained areas where one can use and install modules without affecting the versions you have on your system. I like to imagine it as the .git of Python packages. It is embarrassing that I did not use this technique sooner, as it allowed me to try out different versions of all the packages I had been using and keep them up to date with a requirements.txt. If you do not know about virtual environments yet, I urge you to read more about them before continuing any Python project you have!
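As a quick sketch, the typical venv workflow looks something like this (the `.venv` directory name is just a common convention, not from the original post):

```shell
# Create and activate an isolated environment for the project.
python3 -m venv .venv
. .venv/bin/activate
pip freeze > requirements.txt     # record the installed package versions
# Later, on another machine, the same environment can be recreated with:
#   pip install -r requirements.txt
```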

Templates were the biggest feature of Django-plotly-dash. They made it very easy to embed a dashboard into an HTML page. It was a little difficult to understand at first, partly because it works with such a lightweight syntax. With some supplementary reading, I was able to get a working version using this approach. I could even display multiple dashboards on one page, which was a great success.

An early iteration of both dashboards on a single page

This type of view was made possible by creating my own Jinja2 template tag. It was necessary to create such a tag to handle the conversion of a dictionary to JSON format. I named it jsonify, and it was loaded on a page like so:

{% load jsonify %}

Since many of the dash components would be loaded with the ‘value’ property, much like in JavaScript, I used it to attach the ‘value’ tag to whatever data I needed to get to the page from within the dictionary. By using:

new_data[header] = {'value': data.get(header)}

I could prepare any column data for the dashboard, even data read from a database.

So my incoming variables looked like this:

oee_value: {'value': '99'}

This would be mapped to the component named “oee_value”, and the default value 99 would automatically be applied to whatever type of component it was.
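A tiny self-contained sketch of that preparation step (the field names here are illustrative, not from the original project):

```python
# Wrap each column's data in a {'value': ...} dict keyed by component id,
# so each entry maps straight onto a dash component's 'value' property.
data = {'oee_value': '99', 'serial_number': 'A123'}
new_data = {header: {'value': data.get(header)} for header in data}
print(new_data['oee_value'])  # {'value': '99'}
```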

I used this technique for several components on the page. But what about components that did not have a ‘value’ parameter by default? Some components were ‘text’ based or driven by some boolean indicator. I did not want to have to decipher which component this data belonged to. It was already working so well, and the code looked super simple.

Take for example this bootstrap component, which I used as a way to display the serial number in a nicely formatted, well coloured manner.


For these, I would use some callback tricks by defining a hidden input field. Then, on the initial load of the page, I would use the callback to assign the value to its rightful display:

dcc.Input(id='Number', type='hidden', value='filler')

The values coming in from the template would be matched to the id and value of the hidden input component; this, in turn, would invoke its callback, which is how I got the serial number into a neat-looking bootstrap badge. Using these same techniques, I could query for the machine type, so that each machine in the list would have its appropriate picture displayed.

@app.callback(Output('NumberBadge', 'children'), [Input('Number', 'value')])
def value_to_children(val):
    return val

A look at a few visual upgrades

Django allows you to define your own tags, which are like little functions sprinkled into the HTML code. If you are unfamiliar with this, imagine a syntax close to Razor (using the @ symbol to declare that you are doing some server-side processing). There were a lot of helpful articles that helped me along the way.

So after settling in with django-plotly-dash, OPCUA was the next package to configure for production.

At the time, I had only been looking at past data: information written to files an hour after the event had passed. In order for my dashboards to fulfill an operator’s or factory manager’s needs, I needed to show how the machine was performing in the moment. Production personnel would want to see data as it changes in real time.

After a lot of testing, here is how I wanted it to work:

The machines were already broadcasting their information using an OPCUA server on board the machine, but nothing was receiving on the other end. It was my job to create a subscription client to listen for all the changes in data within each endpoint. My plan was to create one connection managed by the web-application. It is from here I would create an additional cache for the page that was temporary. Every so often, I would have an external ‘watchdog’ class make a redundant snapshot during the machine’s run-time.

Useful UAExpert properties

UAExpert was a very useful program for testing connections and helping discover node namespaces. It provides a GUI and allows you to better organize your connected systems. With the help of UAExpert, I could navigate to the nodes I needed namespaces for and call them directly. In code, it looked something like this:

from opcua import Client, Subscription, ua

client = Client(input_ip_address)  # address in the form opc.tcp://IP:Port

Create a client object using the Client class provided.

sub_handler = SubHandler()


server_node = client.get_server_node() #optional

gen_subscription = client.create_subscription(1000, sub_handler)

this_node = client.get_node("ns=3;s=::NODE_NAME")  # string notation of the node


Create a SubHandler class which defines a datachange_notification method, then create a subscription object with create_subscription, passing in that handler. Use the client's get_node method to make node objects, then call the subscription's subscribe_data_change method to attach each node to the handler.

Within this method, use any means to capture the data. I used an if statement block to determine the node names that were being changed, and then saved those variables to a dictionary with keys pertaining to that node name.
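A minimal sketch of what such a handler might look like (the class body here is my own illustration; only the `datachange_notification` hook name comes from the python-opcua library):

```python
class SubHandler:
    """Receives OPC UA data-change events and caches the latest values."""

    def __init__(self):
        self.latest = {}  # node name -> most recent value

    def datachange_notification(self, node, val, data):
        # Keep this fast: slow or network operations here can cause
        # a build-up of events in the stream.
        self.latest[str(node)] = val
```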

Note: OPCUA forewarns you not to use any slow or network operations here, as this could lead to a build-up of events in the stream.

And so now I had real-time data entering the dashboard. Using the callback syntax from earlier, I could pick up changes made to my dictionary and use it as a sort of cache for the view.

The next step was to put a bit of meaning behind all this data, and give it some stylish looks.


by Adam Pucciano at Fri May 15 2020 14:31:51 GMT+0000 (Coordinated Universal Time)

OMG! Ubuntu

Ubuntu Touch Demoed on the PineTab Linux Tablet [Video]

Watch Ubuntu Touch running on a PineTab in this short video shared by Pine64, the company behind the (upcoming) $89 Linux tablet with dockable keyboard.

This post, Ubuntu Touch Demoed on the PineTab Linux Tablet [Video] is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Fri May 15 2020 14:11:06 GMT+0000 (Coordinated Universal Time)

Yoosuk Sim

Debugging with GCC: GIMPLE


One of the very first things GCC asks GSoC applicants to do, even before writing the application, is to try various debugging techniques on GCC. I was personally familiar with the basic compile-with-g-flag-and-use-gdb method. Turns out, there's more: GIMPLE.

A Simple but Non-trivial Program

Problem Description

The instructions ask you to compile a simple but non-trivial program with some flags that generate debugging information: `-O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all`. Because I had been reading about ways to debug GCC just prior to that statement, I immediately thought "GCC" and tried `make -j8 CXXFLAGS="-O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all"`. This was a mistake: it turns out GCC can't be compiled with those flags. Thankfully, GCC developers have a very active IRC channel where I could signal SOS.


jakub and segher were quick to respond to my call for help.

jakub: it isn't meant that you should build gcc with those flags, you should pick some source and compile that with the newly built gcc with those flags
jakub: and look at the files it produces
jakub: the dumps show you what gcc has been doing in each pass, so e.g. when one is looking at a wrong-code bug, one can look at which dump shows the bad code first and debug the corresponding pass
jakub: another common method (at least recently) is, if looking e.g. at a wrong-code regression where some older gcc version worked fine and current doesn't
jakub: to bisect which gcc commit changed the behavior and from diffing the dumps with gcc immediately before that change and after find out what changed and try to understand why
segher: where "recently" is ten or so years :-)
segher: (but diffing dump files isn't great still)
So, the above flags provide even more depth of understanding of what is happening from the compiler's perspective. Digging around in the GCC Developer Options documentation and the gcc output, I found what some of the flags were for:
  • `-S`: Stop after the stage of compilation proper; do not assemble.
  • `-fdump-tree-all`: Control the dumping at various stages of processing the intermediate language tree to a file; in this case, all stages.
  • `-fdump-ipa-all`: Control the dumping at various stages of inter-procedural analysis to a file; in this case, all inter-procedural analysis dumps.
  • `-fdump-rtl-all`: Make debugging dumps during compilation; the list is too big to repeat here.
This adds a whole new depth of information I hadn't imagined before. I dusted off my old assignments from my OpenMP class and decided to give it a spin.

Dusting off my old assignments

The assignment was a simple affair, comparing the efficiencies of various OpenMP features addressing one particular problem: reduction. I decided to look particularly at the worksharing example: it had some single-threaded operations as well as several different OpenMP operations, and I hoped that would give me a glimpse of various different formations of output. Since my ultimate goal is to work on OMPD, brushing up on OpenMP context seemed logical. My source code was all in a solitary file, so I issued the compilation command: `g++ -std=c++17 -fopenmp -O3 -S -fdump-tree-all -fdump-ipa-all -fdump-rtl-all`. No errors. Instead, I ended up with 225 dump files, written in an intermediate language called GIMPLE.


GIMPLE, as defined in the GIMPLE documentation, is "a three-address representation derived from GENERIC by breaking down GENERIC expressions into tuples of no more than 3 operands (with some exceptions like function calls)." My shallow understanding and assumption is that GIMPLE is language- and architecture-independent, which sounds similar to the Java bytecode idea, although the latter is probably very dependent on the JVM as the target architecture. It is also here that many of the optimizations take place. Since GIMPLE is intermediary code for all languages supported by GCC, and for all target architectures, the same optimization done on GIMPLE affects all languages and architectures.

Files of interests

I could not dare read all 225 files. Perhaps some day, but for now it would have overwhelmed me. Besides, it seems like each file is an evolution of another, making them look very similar to each other, with specific tweaks applied at each step. That said, I was immediately drawn to the dump that seems to be the point where the code was translated to GIMPLE, as well as the one that has lower-level GIMPLE methods specific to OpenMP, and the various optimization dumps, each with a different optimization applied. More studying to do.

Going forward

I am going to learn more about GIMPLE and try to understand its OpenMP portion more in depth. I should also start reading the OMPD documentation to find correlations linking the two projects together. This is exciting for me, and I can't wait to take the next step.

by Yoosuk Sim at Fri May 15 2020 00:30:34 GMT+0000 (Coordinated Universal Time)

Thursday, May 14, 2020


Mozilla

Request for comment: how to collaboratively make trustworthy AI a reality

A little over a year ago, I wrote the first of many posts arguing: if we want a healthy internet — and a healthy digital society — we need to make sure AI is trustworthy. AI, and the large pools of data that fuel it, are central to how computing works today. If we want apps, social networks, online stores and digital government to serve us as people — and as citizens — we need to make sure the way we build with AI has things like privacy and fairness built in from the get go.

Since writing that post, a number of us at Mozilla — along with literally hundreds of partners and collaborators — have been exploring the questions: What do we really mean by ‘trustworthy AI’? And, what do we want to do about it?

 How do we collaboratively make trustworthy AI a reality? 

Today, we’re kicking off a request for comment on  v0.9 of Mozilla’s Trustworthy AI Whitepaper — and on the accompanying theory of change diagram that outlines the things we think need to happen. While I have fallen out of the habit, I have traditionally included a simple diagram in my blog posts to explain the core concept I’m trying to get at. I would like to come back to that old tradition here:

This cartoonish drawing gets to the essence of where we landed in our year of exploration: ‘agency’ and ‘accountability’ are the two things we need to focus on if we want the AI that surrounds us every day to be more trustworthy. Agency is something that we need to proactively build into the digital products and services we use — we need computing norms and tech building blocks that put agency at the forefront of our design process. Accountability is about having effective ways to react if things go wrong — ways for people to demand better from the digital products and services we use every day and for governments to enforce rules when things go wrong. Of course, I encourage you to look at the full (and fancy) version of our theory of change diagram — but the fact that ‘agency’ (proactive) and ‘accountability’ (reactive) are the core, mutually reinforcing parts of our trustworthy AI vision is the key thing to understand.

In parallel to developing our theory of change, Mozilla has also been working closely with partners over the past year to show what we mean by trustworthy AI, especially as it relates to consumer internet technology. A significant portion of our 2019 Internet Health Report was dedicated to AI issues. We ran campaigns to: pressure platforms like YouTube to make sure their content recommendations don’t promote misinformation; and call on Facebook and others to open up APIs to make political ad targeting more transparent. We provided consumers with a critical buying guide for AI-centric smart home gadgets like Amazon Alexa. We invested ~$4M in art projects and awarded fellowships to explore AI’s impact on society. And, as the world faced a near universal health crisis, we asked  questions about how issues like AI, big data and privacy will play during — and after — the pandemic. As with all of Mozilla’s movement building work, our intention with our trustworthy AI efforts is to bias towards action and working with others.

A request for comments

It’s with this ‘act + collaborate’ bias in mind that we are embarking on a request for comments on v0.9 of the Mozilla Trustworthy AI Whitepaper. The paper talks about how industry, regulators and citizens of the internet can work together to build more agency and accountability into our digital world. It also talks briefly about some of the areas where Mozilla will focus, knowing that Mozilla is only one small actor in the bigger picture of shifting the AI tide.

Our aim is to use the current version of this paper as a foil for improving our thinking and — even more so — for identifying further opportunities to collaborate with others in building more trustworthy AI. This is why we’re using the term ‘request for comment’ (RFC). It is a very intentional hat tip to a long standing internet tradition of collaborating openly to figure out how things should work. For decades, the RFC process has been used by the internet community to figure out everything from standards for sharing email across different computer networks to best practices for defeating denial of service attacks. While this trustworthy AI effort is not primarily about technical standards (although that’s part of it), it felt (poetically) useful to frame this process as an RFC aimed at collaboratively and openly figuring out how to get to a world where AI and big data work quite differently than they do today.

We’re imagining that Mozilla’s trustworthy AI request for comment process includes three main steps, with the first step starting today.

Step 1: partners, friends and critics comment on the white paper

During this first part of the RFC, we’re interested in: feedback on our thinking; further examples to flesh out our points, especially from sources outside Europe and North America; and ideas for concrete collaboration.

The best way to provide input during this part of the process is to put up a blog post or some other document reacting to what we’ve written (and then share it with us). This will give you the space to flesh out your ideas and get them in front of both Mozilla (send us your post!) and a broader audience. If you want something quicker, there is also an online form where you can provide comments. We’ll be holding a number of online briefings and town halls for people who want to learn about and comment on the content in the paper — sign up through the form above to find out more. This phase of the process starts today and will run through September 2020.

Step 2: collaboratively map what’s happening — and what should happen 

Given our focus on action, mapping out real trustworthy AI work that is already happening — and that should happen — is even more critical than honing frameworks in the white paper. At a baseline, this means collecting information about educational programs, technology building blocks, product prototypes, consumer campaigns and emerging government policies that focus on making trustworthy AI a reality.

The idea is that the ‘maps’ we create will be a resource for both Mozilla and the broader field. They will help Mozilla direct its fellows, funding and publicity efforts to valuable projects. And, they will help people from across the field see each other so they can share ideas and collaborate completely independently of our work.

Process-wise, these maps will be developed collaboratively by Mozilla’s Insights Team with involvement of people and organizations from across the field. Using a mix of feedback from the white paper comment process (step 1) and direct research, they will develop a general map of who is working on key elements of trustworthy AI. They will also develop a deeper landscape analysis on the topic of data stewardship and alternative approaches to data governance. This work will take place from now until November 2020.

Step 3: do more things together, and update the paper

The final — and most important — part of the process will be to figure out where Mozilla can do more to support and collaborate with others. We already know that we want to work more with people who are developing new approaches to data stewardship, including trusts, commons and coops. We see efforts like these as foundational building blocks for trustworthy AI. Separately, we also know that we want to find ways to support African entrepreneurs, researchers and activists working to build out a vision of AI for that continent that is independent of the big tech players in the US and China. Through the RFC process, we hope to identify further areas for action and collaboration, both big and small.

Partnerships around data stewardship and AI in Africa are already being developed by teams within Mozilla. A team has also been tasked with identifying smaller collaborations that could grow into something bigger over the coming years. We imagine this will happen slowly through suggestions made and patterns identified during the RFC process. This will then shape our 2021 planning — and will feed back into a (hopefully much richer) v1.0 of the whitepaper. We expect all this to be done by the end of 2020.

Mozilla cannot do this alone. None of us can

As noted above: the task at hand is to collaboratively and openly figure out how to get to a world where AI and big data work quite differently than they do today. Mozilla cannot do this alone. None of us can. But together we are much greater than the sum of our parts. While this RFC process will certainly help us refine Mozilla’s approach and direction, it will hopefully also help others figure out where they want to take their efforts. And, where we can work together. We want our allies and our community not only to weigh in on the white paper, but also to contribute to the collective conversation about how we reshape AI in a way that lets us build — and live in — a healthier digital world.

PS. A huge thank you to all of those who have collaborated with us thus far and who will continue to provide valuable perspectives to our thinking on AI.

The post Request for comment: how to collaboratively make trustworthy AI a reality appeared first on The Mozilla Blog.

by Mozilla at Thu May 14 2020 19:41:01 GMT+0000 (Coordinated Universal Time)

Wednesday, May 13, 2020


Welcome Adam Seligman, Mozilla’s new Chief Operating Officer

I’m excited to announce that Adam Seligman has joined Mozilla as our new Chief Operating Officer. Adam will work closely with me to help scale our businesses, growing capabilities, revenue and impact to fulfill Mozilla’s mission in service to internet users around the world.

Our goal at Mozilla is to build a better internet. To provide products and services that people flock to, and that elevate a safer, more humane, less surveillance and exploitation-based reality. To do this —  especially now — we need to engage with our customers and other technologists; ideate, test, iterate and ship products; and develop revenue sources faster than we’ve ever done.

Adam has a proven track record of building businesses and communities in the technology space. With a background in computer science, Adam comes to Mozilla with nearly two decades of experience in our industry. He managed a $1B+ cloud platform at Salesforce, led developer relations at Google and was a key member of the web platform strategy team at Microsoft.

Adding Adam to our team will accelerate our ability to solve big problems of online life, to create product offerings that connect to consumers, and to develop revenue models in ways that align with our mission. Adam is joining Mozilla at a time when people are more reliant than ever on the internet, but also questioning the impact of technology on their lives. They are looking for leadership and solutions from organizations like Mozilla. Adam will help grow Mozilla’s capacity to offer a range of products and services that meet people’s needs for online life.

“The open internet has brought incredible changes to our lives, and what we are witnessing now is a massive acceleration,” said Adam Seligman, Mozilla’s new Chief Operating Officer. “The open internet is keeping our industries running and our children learning. It’s our lifeline to family and friends, and it’s our primary source of information. It powers everything from small business to social movements. I want to give back to an internet that works for people — not against them. And there is no better champion for a people-first vision of the internet than Mozilla.”

In his capacity as Chief Operating Officer, Adam will lead the Pocket, Emerging Technologies, Marketing and Open Innovation teams to accelerate product growth and profitable business models, and work in close coordination with Dave Camp and the Firefox organization to do the same.

I eagerly look forward to working together with Adam to navigate these troubled times and build a vibrant future for Mozilla’s product and mission.

The post Welcome Adam Seligman, Mozilla’s new Chief Operating Officer appeared first on The Mozilla Blog.

by Mozilla at Wed May 13 2020 16:00:07 GMT+0000 (Coordinated Universal Time)

Tuesday, May 12, 2020


What the heck happened with .org?

If you are following the tech news, you might have seen the announcement that ICANN withheld consent for the change of control of the Public Interest Registry and that this had some implications for .org.  However, unless you follow a lot of DNS inside baseball, it might not be that clear what all this means. This post is intended to give a high level overview of the background here and what happened with .org. In addition, Mozilla has been actively engaged in the public discussion on this topic; see here for a good starting point.

The Structure and History of Internet Naming

As you’ve probably noticed, Web sites have names. These are called “domain names.” The way this all works is that there are a number of “top-level domains” (.org, .com, .io, …) and then people can get names within those domains (i.e., names that end in one of those). Top-level domains (TLDs) come in two main flavors:

  • Country-code top-level domains (ccTLDs), which represent some country or region, like .us (United States) or .uk (United Kingdom)
  • Generic top-level domains (gTLDs), which are not tied to any country, like .com and .org

Back at the beginning of the Internet, there were five gTLDs which were intended to roughly reflect the type of entity registering the name:

  • .com: for “commercial-related domains”
  • .edu: for educational institutions
  • .gov: for government entities (really, US government entities)
  • .mil: for the US Military (remember, the Internet came out of US government research)
  • .org: for organizations (“any other domains”)

It’s important to remember that until the 90s, much of the Internet ran under an Acceptable Use Policy which discouraged or forbade commercial use, and so these distinctions were inherently somewhat fuzzy; nevertheless, people had the rough understanding that .org was for non-profits and the like and .com was for companies.

During this period the actual name registrations were handled by a series of government contractors (first SRI and then Network Solutions) but the creation and assignment of the top-level domains was under the control of the Internet Assigned Numbers Authority (IANA), which in practice mostly meant the decisions of its Director, Jon Postel. However, as the Internet became bigger, this became increasingly untenable, especially as IANA was run under a contract to the US government. Through a long and somewhat complicated series of events, in 1998 this responsibility was handed off to the Internet Corporation for Assigned Names and Numbers (ICANN), which administers the overall system, including setting the overall rules and determining which gTLDs will exist (which ccTLDs exist is determined by ISO 3166-1 country codes, as described in RFC 1591). ICANN has created a pile of new gTLDs, such as .dev, .biz, and .wtf (you may be wondering whether the world really needed .wtf, but there it is). As an aside, many of the newer names you see registered are not actually under gTLDs, but rather ccTLDs that happen to correspond to countries lucky enough to have cool-sounding country codes. For instance, .io is actually the TLD of the British Indian Ocean Territory and .tv belongs to Tuvalu.

One of the other things that ICANN does is determine who gets to run each TLD. The way this all works is that ICANN determines who gets to be the registry, i.e., who keeps the records of who has which name as well as some of the technical data needed to actually route name lookups. The actual work of registering domain names is done by a registrar, who engages with the customer. Importantly, while registrars compete for business at some level (i.e., multiple people can sell you a domain in .com), there is only one registry for a given TLD and so they don’t have any price competition within that TLD; if you want a .com domain, VeriSign gets to set the price floor. Moreover, ICANN doesn’t really try to keep prices down; in fact, they recently removed the cap on the price of .org domains (bringing it in line with most other TLDs). One interesting fact about these contracts is that they are effectively perpetual: the contracts themselves are for quite long terms and registry agreements typically provide for automatic renewal except under cases of significant misbehavior by the registry. In other words, this is a more or less permanent claim on the revenues for a given TLD.

The bottom line here is that this is all quite lucrative. For example, in FY19 VeriSign’s revenue was over $1.2 billion. ICANN itself makes money in two main ways. First, it takes a cut of the revenue from each domain registration and second it auctions off the contracts for new gTLDs if more than one entity wants to register them. In the fiscal year ending in June 2018, ICANN made $136 million in total revenues (it was $302 million the previous year due to a large amount of revenue from gTLD auctions).

ISOC and .org

This brings us to the story of ISOC and .org. Until 2003, VeriSign operated .com, .net, and .org, but ICANN and VeriSign agreed to give up running .org (while retaining the far more profitable .com). As stated in their proposal:

As a general matter, it will largely eliminate the vestiges of special or unique treatment of VeriSign based on its legacy activities before the formation of ICANN, and generally place VeriSign in the same relationship with ICANN as all other generic TLD registry operators. In addition, it will return the .org registry to its original purpose, separate the contract expiration dates for the .com and .net registries, and generally commit VeriSign to paying its fair share of the costs of ICANN without any artificial or special limits on that responsibility.

The Internet Society (ISOC) is a nonprofit organization with the mission to support and promote “the development of the Internet as a global technical infrastructure, a resource to enrich people’s lives, and a force for good in society”. In 2002, they submitted one of 11 proposals to take over as the registry for .org and ICANN ultimately selected them. ICANN had a list of 11 criteria for the selection and the board minutes are pretty vague on the reason for selecting ISOC, but at the time this was widely understood as ICANN using the .org contract to provide a subsidy for ISOC and ISOC’s work. In any case, it ended up being quite a large subsidy: in 2018, PIR’s revenue from .org was over $92 million.

The actual mechanics here are somewhat complicated: it’s not like ISOC runs the registry itself. Instead they created a new non-profit subsidiary, the Public Interest Registry (PIR), to hold the contract with ICANN to manage .org. PIR in turn contracts the actual operations to Afilias, which is also the registry for a pile of other domains in their own right. [This isn’t an uncommon structure. For instance, VeriSign is the registry for .com, but they also run .tv for Tuvalu.] This will become relevant to our story shortly. Additionally, in the summer of 2019, PIR’s ten year agreement with ICANN renewed, but under new terms: looser contractual conditions to mirror those for the new gTLDs (yes, including .wtf), including the removal of a price cap and certain other provisions.

The PIR Sale

So, by 2018, ISOC was sitting on a pretty large ongoing revenue stream in the form of .org registration fees. However, ISOC management felt that having essentially all of their funding dependent on one revenue source was unwise and that actually running .org was a mismatch with ISOC’s main mission. Instead, they entered into a deal to sell PIR (and hence the .org contract) to a private equity firm called Ethos Capital, which is where things get interesting.

Ordinarily, this would be a straightforward-seeming transaction, but under the terms of the .org Registry Agreement, ISOC had to get approval from ICANN for the sale (or at least for PIR to retain the contract):

7.5              Change of Control; Assignment and Subcontracting.  Except as set forth in this Section 7.5, neither party may assign any of its rights and obligations under this Agreement without the prior written approval of the other party, which approval will not be unreasonably withheld.  For purposes of this Section 7.5, a direct or indirect change of control of Registry Operator or any subcontracting arrangement that relates to any Critical Function (as identified in Section 6 of Specification 10) for the TLD (a “Material Subcontracting Arrangement”) shall be deemed an assignment.

Soon after the proposed transaction was announced, a number of organizations (especially Access Now and EFF) started to surface concerns about the transaction. You can find a detailed writeup of those concerns here but I think a fair summary of the argument is that .org was special (and in particular that a lot of NGOs relied on it) and that Ethos could not be trusted to manage it responsibly. A number of concerns were raised, including that Ethos might aggressively raise prices in order to maximize their profit or that they could be more susceptible to governmental pressure to remove the domain names of NGOs that were critical of them. You can find Mozilla’s comments on the proposed sale here. The California Attorney General’s Office also weighed in opposing the sale in a letter that implied it might take independent action to stop it, saying:

This office will continue to evaluate this matter, and will take whatever action necessary to protect Californians and the nonprofit community.

In turn, Ethos and ISOC mounted a fairly aggressive PR campaign of their own, including creating a number of new commitments intended to alleviate concerns that had been raised, such as a new “Stewardship Council” with some level of input into privacy and policy decisions, an amendment to the operating agreement with ICANN to provide for additional potential oversight going forward, and a promise not to raise prices by more than 10%/year for 8 years. At the end of the day these efforts did not succeed: ICANN announced on April 30 that they would withhold consent for the deal (see here for their reasoning).

What Now?

As far as I can tell, this decision merely returns the situation to the status quo ante (see this post by Milton Mueller for some more detailed analysis). In particular, ISOC will continue to operate PIR and be able to benefit from the automatic renewal (and the agreement runs through 2029 in any case). To the extent to which you trusted PIR to manage .org responsibly a month ago, there’s no reason to think that has changed (of course, people’s opinions may have changed because of the proposed sale). However, as Mueller points out, none of the commitments that Ethos made in order to make the deal more palatable apply here; in particular, thanks to the new contract in 2019, PIR is free to raise prices without being bound by the 10% annual commitment that Ethos had offered.

It’s worth noting that “Save dot Org” at least doesn’t seem happy to leave .org in the hands of ISOC and in particular has called for ICANN to rebid the contract. Here’s what they say:

This is not the final step needed for protecting the .Org domain. ICANN must now open a public process for bids to find a new home for the .Org domain. ICANN has established processes and criteria that outline how to hold a reassignment process. We look forward to seeing a competitive process and are eager to support the participation in that process by the global nonprofit community.

For ICANN to actually try to take .org away from ISOC seems like it would be incredibly contentious and ICANN hasn’t given any real signals about what they intend to do here. It’s possible they will try to rebid the contract (though it’s not clear to me exactly whether the contract terms really permit this) or that they’ll just be content to leave things as they are, with ISOC running .org through 2029.

Regardless of what the Internet Society and ICANN choose to do here, I think that this has revealed the extent to which the current domain name ecosystem depends on informal understandings of what the various actors are going to do, as opposed to formal commitments to do them. For instance, many opposed to the sale seem to have expected that ISOC would continue to manage .org in the public interest and felt that the Ethos sale threatened that. However, as a practical matter the registry agreement doesn’t include any such obligation and in particular nothing really stops them from raising prices much higher in order to maximize profit as opponents argued Ethos might do (although ISOC’s nonprofit status means they can’t divest those profits directly). Similarly, those who were against the sale and those who were in favor of it seem to have had rather radically different expectations about what ICANN was supposed to do (actively ensure that .org be managed in a specific way versus just keep the system running with a light touch) and at the end of the day were relying on ICANN’s discretion to act one way or the other. It remains to be seen whether this is an isolated incident or whether this is a sign of a deeper disconnect that will cause increasing friction going forward.

The post What the heck happened with .org? appeared first on The Mozilla Blog.

by Mozilla at Tue May 12 2020 00:19:27 GMT+0000 (Coordinated Universal Time)

Monday, May 11, 2020

OMG! Ubuntu

Noted is keyboard-driven note taking app for macOS & Linux

Noted is a new keyboard-driven note-taking app freely available for Linux and macOS. The app is inspired by open source Mac app 'Notational Velocity'.

This post, Noted is keyboard-driven note taking app for macOS & Linux is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Mon May 11 2020 17:59:37 GMT+0000 (Coordinated Universal Time)

Adam Pucciano

Python Series: First Approach

This is a continuation from my Python series. Check out the introduction and Part 1 where I explain all the tools used for this project here.

Part 2: First Approach

My first approach to Python started with Pandas. Reading as much as I could, and looking at the useful ‘cookbooks‘ the community had posted, was a tremendous help. It also helped getting sidetracked in my spare time with Python’s plethora of very easy-to-use image classification packages. When things are easy to use, it makes them super fun. Python, for me, was starting to become just that.

I struggled a lot at first – not so much with getting used to the syntax, but with the confidence that the code I wrote would run properly; it was a different kind of script than what I was used to writing. It was a very short program, but a lot of the heavy lifting was done by Pandas. More and more, I kind of fell in love.

I started with the CSV files that my original C# code would generate. Pandas has a nifty method that creates DataFrames from these types of files. Data frames are where you want to start your data processing. These objects have very useful behaviors that allow the programmer to manipulate data with ease. Organized as a matrix, they can be queried, merged, and filtered like SQL, but can also be iterated through like a list, meaning that any mathematical operations on the data set become much easier to perform. This also cut out much of the code base I needed to maintain and allowed me to focus my efforts on coding features that used the data in creative ways. I started to realize why Python was branded as such a great tool for data scientists.
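As an illustration of those DataFrame behaviors (the data below is made up for the example, not taken from the real machines), the SQL-like filtering and merging, plus list-like iteration, look like this:

```python
import pandas as pd

# Hypothetical machine data: cycle times recorded per machine
cycles = pd.DataFrame({
    "machine": ["M1", "M1", "M2", "M2"],
    "cycle_time": [12.1, 11.8, 14.5, 13.9],
})
machines = pd.DataFrame({
    "machine": ["M1", "M2"],
    "line": ["A", "B"],
})

# Filter like SQL's WHERE...
slow = cycles[cycles["cycle_time"] > 12.0]

# ...merge like a JOIN...
joined = cycles.merge(machines, on="machine")

# ...and iterate like a list when needed
total = sum(row.cycle_time for row in joined.itertuples())
print(len(slow), total)
```

The same operations in hand-rolled C# would each have needed explicit loops and bookkeeping; here each one is a single expression.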

An example of this can be seen in the report outlining triggered stop alarms. Usually, this report would show when alarm events had halted production, indexed by timestamp. It would indicate which alarm code was triggered and how long the alarm stayed present until it was cleared by an operator. Using matrices, I could not only display the alarms on a time scale, but also aggregate the data and figure out which alarms were triggered the most. Furthermore, I could group the data by day, index the result by the type of alarm triggered, and sum the total duration for that particular event. An analyst could now clearly see on which days the alarms were triggered the most, not just an overall average.
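A sketch of that kind of aggregation, assuming a hypothetical frame with timestamp, alarm code, and duration columns:

```python
import pandas as pd

# Hypothetical alarm events: when they fired, which code, how long (seconds)
alarms = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-05-01 08:00", "2020-05-01 09:30",
        "2020-05-02 10:15", "2020-05-02 11:00",
    ]),
    "alarm_code": ["E01", "E02", "E01", "E01"],
    "duration": [120, 45, 300, 60],
})

# Which alarms were triggered the most overall
counts = alarms["alarm_code"].value_counts()

# Group by day and alarm code, summing total downtime for each
per_day = (alarms
           .groupby([alarms["timestamp"].dt.date, "alarm_code"])["duration"]
           .sum())
print(counts.idxmax(), per_day.max())
```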

An early version of the alarm report system, graphed on the same timeline as efficiency data
Using the built-in interactive nature of the graphs allows users to specify the data they need.

The biggest advantage was that I could put my trust in the integrity of this library, which eliminated the need for me to validate results. Since this was a one-man operation, I wanted to find more libraries I could leverage like this.

This is exactly what I found with plotly and dash. The two libraries work hand in hand, as they were developed by the same group, and unsurprisingly plotly also works out of the box with DataFrames. This again proved to be a critical moment for productivity, as I could continue to put my trust in these libraries to get what I needed in development. I urge any programmer to do the same, especially in an experimental phase. Do not be hesitant to try new libraries!

So began my new plotly-dash program, which took in my already-processed files and created interactive graphs that could prove useful to a user. One issue remained, however: the whole process still felt very segregated. The files had to be processed first (in C#) and then used in the Python script. This created a whole bunch of problems and loopholes that were not very cohesive, and it turned into a very big mess when I tried to fill the gaps.

In about a day, I managed to rewrite the whole C# application I had made previously as a single Python script. With the help of numpy (a common Python library), I could define a datatype that exactly matches my schema and tell a file-reading method how to traverse each of the various binary files by describing their chunk size. The process looked something like this:

import os
import numpy as np
import pandas as pd

# Define a structured block (effi_names is the list of field names, defined elsewhere)
exampleType = np.dtype({
    'names': effi_names,
    'offsets': [1, 25, 29, 33],
    'formats': ['a20', 'f4', 'i4', 'u4'],
    'itemsize': 55
})

# Read files given my datatypes with numpy's fromfile method
df_toReturn = pd.DataFrame(np.fromfile(os.getcwd() + filename, exampleType))

That’s all it took to do it in Python! It really helped having the C# version after all, as I could run both and verify the results using my functional Windows program. I guess that was some advantage to having first programmed the logic in a more familiar language. I knew when I had an incorrect offset, or when an itemsize caused a misalignment in the file, just by looking at the outputs. Using the fromfile method, the rest of the procedure was the same, and I was able to add all the datatypes. Excluding some logic to decode the byte results to UTF-8, the amount of actual coding would exactly follow this procedure.
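To make the offset/itemsize behavior concrete, here is a self-contained sketch (with made-up field names and a synthetic in-memory record, not the real machine schema) that builds one 55-byte record by hand and parses it back with the same style of structured dtype; np.frombuffer behaves like np.fromfile but reads from bytes instead of a file:

```python
import numpy as np
import pandas as pd

# Structured dtype mirroring the pattern above: a 20-byte string at offset 1,
# then a float32, an int32, and a uint32, inside a 55-byte record
record_type = np.dtype({
    "names": ["label", "efficiency", "count", "status"],
    "offsets": [1, 25, 29, 33],
    "formats": ["a20", "f4", "i4", "u4"],
    "itemsize": 55,
})

# Build one synthetic 55-byte record by writing each field at its offset
raw = bytearray(55)
raw[1:21] = b"TEST".ljust(20, b"\x00")
raw[25:29] = np.float32(0.95).tobytes()
raw[29:33] = np.int32(1234).tobytes()
raw[33:37] = np.uint32(7).tobytes()

# Parse it back; a wrong offset or itemsize here immediately garbles the output
df = pd.DataFrame(np.frombuffer(bytes(raw), dtype=record_type))
print(df.loc[0, "count"], df.loc[0, "status"])
```

Round-tripping a known record like this is a quick way to catch the misalignments described above without needing a second reference implementation.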

It became a utility class, a script that no longer needed to be tied to the Windows platform. It would take in binary files and spit out a data frame directly, or CSV files, depending on the context. At a generous 300 lines of code (with comments), the new binary file reader felt more portable and better suited for the job. I felt better prepared for future additions, even if the end result felt a bit more ‘hacked’ together, as much of the logic in this script was taken from parts I had read and followed in the Python examples section. During the development phase it proved to be a reliable script, and more maintainable than its C# counterpart.

As a result, my initial C# program became obsolete, but not without teaching me a few lessons. For one, I learned how to read binary data into a struct using C#, but more importantly I learned not to be afraid of learning and producing software at the same time; to break out of my comfort zone and use the best tool for the job, instead of fixating on using unfit tools for a new project. Not to say I am any expert in C# either. Perhaps one day I will have to revisit this.

Still rough around the edges

While not much had changed from the user’s perspective, I knew that this substitution was a substantial upgrade. I could now take binary files directly from machines and turn them into figures the user could interact with and possibly extrapolate meaning from. For instance, one could compare downtimes with alarm or mold-change events. There was no need for middleware to do any conversions or pre-processing, and at this point exporting the files to CSV format was left as an option rather than the defining feature. As a bonus, this eliminated the need for programming logic to handle saving or overwriting the converted files and their respective directories.

This program was still a little rough around the edges; for example, it required a repository of folders and files to look at while running, along with a few other things that made it rugged. I started to gather all the shortcomings of the new program and added in what I could for quality of life or visuals. I began planning my internal improvements. I needed to continue my direction of automating the experience of ‘looking at machine data’. I knew the exterior ‘flair’ would have to come after, but at least for now I had a pretty flexible and reliable core to work with. I would go on to prototype several other libraries on their own while I did some usability testing with my newest Python interface.



by Adam Pucciano at Mon May 11 2020 17:54:11 GMT+0000 (Coordinated Universal Time)

Friday, May 8, 2020

OMG! Ubuntu

Ready, Set, Bake: Ubuntu 20.04 LTS is Now Certified for the Raspberry Pi

Ubuntu 20.04 LTS is now certified for the Raspberry Pi. The support for the successful single board computer includes additional testing and security fixes.

This post, Ready, Set, Bake: Ubuntu 20.04 LTS is Now Certified for the Raspberry Pi is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Fri May 08 2020 12:48:24 GMT+0000 (Coordinated Universal Time)

Calvin Ho

Typescript + Linters

Taking a small break from Telescope until the summer semester resumes. I've started collaborating with an elementary school friend on a project to build a clone of the game Adventure Capitalist. After working with JavaScript for so long, I decided to try doing this in TypeScript. It went pretty well up until I had the following line of code:

const index = this.shops.findIndex((shop: Shop) => shop.name === shopName);

When I was trying to compile my code, I kept getting the following error

Property 'findIndex' does not exist on type 'Shop[]' 

Pretty sure this should work, as shops is an array of type Shop. As developers usually do when they run into issues, I started googling the problem and checking Stack Overflow. It recommended changing my tsconfig.json "target" to es2015, since findIndex() is an ES6 function, and adding es6 to "lib". I did all that and tried compiling: still no good. I reached out to my frequent collaborator from Telescope, @manekenpix, and he suggested I just try running the code. It works?

Turns out it was a linter issue, although the code still compiled properly. Upon further research two hours later, I realized I was using the CLI command wrong, or at least the way I was using it was going to cause errors. I was compiling my .ts to .js with the command tsc index.ts instead of tsc; when a specific file name is passed, tsc disregards the tsconfig.json settings and just tries to compile that TypeScript file to JavaScript. So I tried running tsc, and it worked! No errors, and it output all the compiled .js files into the /build folder (ignored in .gitignore) I had specified in my tsconfig.json file.

by Calvin Ho at Fri May 08 2020 07:05:49 GMT+0000 (Coordinated Universal Time)

Thursday, May 7, 2020

Adam Pucciano

Creating real value with real-time dashboards in Python

I know I do not post enough programming content, so here’s the first of many upcoming entries about my experience with Python.

NIIGON is an injection molding manufacturing company, the place where I work professionally and dedicate my time. They create massive industrial-grade machinery for plastics manufacturing. My responsibilities there are mostly development IT, which is amazing because it means I get to work on various types of programming projects: web portals, Windows applications, open-software integration and building automation; NIIGON does it all from the ground up with a small team of on-site IT. It feels much like the freedom of a start-up, with the strong foundation of a Fortune 500 company. It’s actually a fantastic place to spend my time, and I have learned a lot.

Here is my chance to share a bit of what I do professionally with other programmers, and give some special ‘shoutouts’ to all the frameworks and libraries I’ve been using.

Over the past couple of months I have been developing a portable dashboard for production machines that are in service. Using a communication connection to the machine, it simply pulls data from active nodes and displays some basic (but important) information on the screen. Cycle time, parts per hour, and efficiency are, I learned, some of the main metrics for measuring machine effectiveness and health. The dashboard also displays the currently assigned job, job progress, and any alarms that may stop the machine’s automatic cycling mode.
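For context, per-machine numbers like these typically roll up into an OEE (overall equipment effectiveness) figure, computed as availability × performance × quality. A minimal sketch with entirely made-up shift numbers:

```python
# Hypothetical shift numbers, purely illustrative
planned_time = 480.0      # minutes in the shift
downtime = 48.0           # minutes lost to alarms, mold changes, etc.
ideal_cycle_time = 0.5    # minutes per part at rated speed
total_parts = 700
good_parts = 665

# Standard OEE decomposition
availability = (planned_time - downtime) / planned_time
performance = (ideal_cycle_time * total_parts) / (planned_time - downtime)
quality = good_parts / total_parts

oee = availability * performance * quality
print(round(availability, 3), round(performance, 3), round(quality, 3), round(oee, 3))
```

Each factor isolates a different loss: availability captures downtime, performance captures slow cycles, and quality captures scrap, which is why a dashboard that already tracks cycle time, parts per hour, and alarms has everything it needs for OEE.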

Without explaining further, I will let these screen-captures better describe what I have developed so far.

A live dashboard view of a machine running in production
An analysis view of a Machine’s OEE statistics


What I came up with is fairly standard stuff, but I would ultimately like to share my journey in creating this feature using Python; hopefully, if someone out there is looking to make a similar project, they can find assistance here.

I anticipate this will be a long post, so I am dissecting this write-up into a few parts. In this series, I will give you all of the tools you need, explain how it all works together and what’s in store for the future, and comment on my experience creating and prototyping this type of technology and some of the hardships and learning curves I accumulated throughout this project.

Part 1: The pieces of the puzzle.

Major frameworks/libraries involved in this project:

  • Django-plotly-dash – for containing the web applications forked to work with dash
  • Free OPCUA – for communicating with OPCUA servers running on the equipment
  • Plotly-dash – for displaying dashboards and creating graphs or figures
  • pandas – handle data locally in an easy way with Data frames
  • json – of course, to move around objects or information in HTTP requests
  • pyodbc – for connections to the SQL database

Notable contributors:

  • ftplib – standard library for handling ftp connections
  • numpy – can’t have pandas without a little bit of numpy
  • Jinja2 – template with tag syntax for easy HTML creation

This project all started with machine data. NIIGON (formerly Athena Automation) had been ahead of the curve, having built part of their system to monitor its sensors and activity, and had been collecting this data for years. An on-board OPCUA server was embedded on each machine, and when coupled with additional custom software, its function was to write bits of snapshot information to tiny binary files stored on the operating system.

There was only one inconvenience, and it proved to be much more intricate than I first planned for: getting the information off the machines and into a usable format was, of course, necessary for all of this to work. This would be key to making real value out of what had been recorded. Much of today’s technology revolves around data analytics and collecting Big Data.

We (NIIGON) had much of the collecting (and recording) already finished. As mentioned, the machines themselves would gather operations data that occurred in the field. At first I decided to make an application in C# to help translate these files. It was a language more familiar to me after working with Windows Forms a lot, and it did the job quite well. Machine files could be read into CSV format and further extrapolated in Excel, using a little GUI program loaded with checkbox options and buttons. However, this approach felt closed-ended. The files would be in another format, and that would be the end of it. It was hardly the flashy app I had envisioned showing my children one day. But more importantly, I had to ask myself whether this application fit the requirements and goals I set out to meet, and whether it would ultimately be usable and provide some real value.

My answer to this reflection was a quick “No”. In its current form, this application would not be adopted by another department for machine analytics. I thought about being the user of this GUI quite a lot, and during testing it was hard to let go of my preconceived knowledge of how it works. While it worked well enough and did all the right things, the check options and functionality for file types seemed a bit more technical than I had anticipated for someone on a sales force team. Still pretty good for a first prototype, but the focus on how the app was used was misguided. I wanted to make the experience even more effortless, so that the app was enjoyable to use and did much of the work for the user, instead of depending on different programs to draw graphs or get information from the raw data. I had to change my perspective on what the application provided to the user as a tool, and how it was actually used as a program. I concluded that all of the tool-like features should just be automated, as part of what the application does or sets up as an environment for the user to work with. How one interacts with the application was starting to evolve: instead of having to concern themselves with organizing the data, the user should be working with the data to create meaning from it.

I wanted to automate this approach even further. All of my research began to point to Python. Libraries (packages) like pandas and NumPy, aimed at data scientists who need a high-level language to process large amounts of data, were the first hits I came across. And that’s when my dive into Python truly began.
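To give a taste of that workflow, here is a short pandas sketch. The data, operation names, and column names are purely illustrative (in practice the frame would come from pd.read_csv on a translated machine log), but it shows the kind of per-operation summary that previously required manual steps in Excel:

```python
import pandas as pd

# Illustrative stand-in for a machine log translated to CSV;
# in practice this would come from pd.read_csv("machine_log.csv").
df = pd.DataFrame({
    "operation": ["inject", "inject", "clamp", "clamp"],
    "cycle_time": [12.1, 11.9, 3.4, 3.6],
})

# Summarize cycle times per operation, the kind of aggregation
# the GUI previously left to checkboxes and Excel.
summary = df.groupby("operation")["cycle_time"].agg(["mean", "min", "max"])
print(summary)
```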

Thanks for reading – More to come!




by Adam Pucciano at Thu May 07 2020 18:11:33 GMT+0000 (Coordinated Universal Time)


Mozilla research shows some machine voices score higher than humans

This blog post is to accompany the publication of the paper Choice of Voices: A Large-Scale Evaluation of Text-to-Speech Voice Quality for Long-Form Content in the Proceedings of CHI’20, by Julia Cambre and Jessica Colnago from CMU, Jim Maddock from Northwestern, and Janice Tsai and Jofish Kaye from Mozilla. 

In 2019, Mozilla’s Voice team developed a method to evaluate the quality of text-to-speech voices. It turns out very little had been done in the world of text to speech to evaluate voices for listening to long-form content — things like articles, book chapters, or blog posts. A lot of the existing work answered the core question of “can you understand this voice?” So a typical test might use a syntactically correct but meaningless sentence, like “The masterly serials withdrew the collaborative brochure”, and have a listener type it in. That way, the listener couldn’t guess missed words from other words in the sentence. But now that we’ve reached a stage of computerized voice quality where so many voices can pass the comprehension test with flying colours, what’s the next step?

How can we determine if a voice is enjoyable to listen to, particularly for long-form content — something you’d listen to for more than a minute or two? Our team had a lot of experience with this: we had worked closely with our colleagues at Pocket to develop the Pocket Listen feature, so you can listen to articles you’ve saved, while driving or cooking. But we still didn’t know how to definitively say that one voice led to a better listening experience than another.

The method we used was developed by our intern Jessica Colnago during her internship at Mozilla, and it’s pretty simple in concept. We took one article, How to Reduce Your Stress in Two Minutes a Day, and we recorded each voice reading that article. Then we had 50 people on Mechanical Turk listen to each recording — 50 different people each time. (You can also listen to clips from most of these recordings to make your own judgement.) Nobody heard the article more than once. And at the end of the article, we’d ask them a couple of questions to check they were actually listening, and to see what they thought about the voice.

For example, we’d ask them to rate how much they liked the voice on a scale of one to five, and how willing they’d be to listen to more content recorded by that voice. We asked them why they thought that voice might be pleasant or unpleasant to listen to. We evaluated 27 voices, and here’s one graph which represents the results. (The paper has lots more rigorous analysis, and we explored various methods to sort the ratings, but the end results are all pretty similar. We also added a few more voices after the paper was finished, which is why there’s different numbers of voices in different places in this research.)
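The ranking step is simple enough to sketch in a few lines. The voice names and scores below are invented, standing in for the roughly 50 listener ratings collected per voice:

```python
import statistics

# Hypothetical per-voice listener ratings on the study's 1-5 scale.
ratings = {
    "voice_a": [5, 4, 4, 5, 3],
    "voice_b": [2, 3, 2, 4, 3],
}

# Sort voices by mean rating, one simple way to order the results.
ranked = sorted(ratings, key=lambda v: statistics.mean(ratings[v]), reverse=True)
print(ranked)
```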

As you can see, some voices rated better than others. The ones at the left are the ones people consistently rated positively, and the ones at the right are the ones that people liked less: just as examples, you’ll notice that the default (American) iOS female voice is pretty far to the right, although the Mac default voice has a pretty respectable showing. I was proud to find that the Mozilla Judy Wave1 voice, created by Mozilla research engineer Eren Gölge, is rated up there along with some of the best ones in the field. It turns out the best electronic voices we tested are Mozilla’s voices and the Polly Neural voices from Amazon. And while we still have some licensing questions to figure out, making sure we can create sustainable, publicly accessible, high quality voices, it’s exciting to see that we can do something in an open source way that is competitive with very well funded voice efforts out there, which don’t have the same aim of being private, secure and accessible to all.

We found there were some generalizable experiences. Listeners were 54% more likely to give a higher experience rating to the male voices we tested than the female voices. We also looked at the number of words spoken in a minute. Generally, our results indicated that there is a “just right speed” in the range of 163 to 177 words per minute, and people didn’t like listening to voices that were much faster or slower than that.
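The speaking-rate measure above is easy to reproduce from a transcript and a recording length; the word count and duration here are invented for illustration:

```python
# Estimate a recording's speaking rate in words per minute (WPM).
# The transcript and duration are illustrative, not from the study.
transcript = "word " * 330  # pretend the article contained 330 words
duration_seconds = 120.0

wpm = len(transcript.split()) / (duration_seconds / 60.0)
print(wpm)  # falls inside the 163-177 WPM "just right" range
```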

But the more interesting result comes from one of the things we did at a pretty late stage in the process, which was to include some humans reading the article directly into a microphone. Those are the voices circled in red:

What we found was that some of our human voices were being rated lower than some of the robot voices. And that’s fascinating. That suggests we are at a point in technology, in society right now, where there are mechanically generated voices that actually sound better than humans. And before you ask, I listened to those recordings of human voices. You can do the same. Janice (the recording labelled Human 2 in the dataset) has a perfectly normal voice that I find pleasant to listen to. And yet some people were finding these mechanically generated voices better.

That raises a whole host of interesting questions, concerns and opportunities. This is a snapshot of computerized voices, in the last two years or so. Even since we’ve done this study, we’ve seen the quality of voices improve. What happens when computers are more pleasant to listen to than our own voices? What happens when our children might prefer to listen to our computer reading a story than ourselves?

A potentially bigger ethical question comes with the question of persuasion. One question we didn’t ask in this study was whether people trusted or believed the content that was read to them. What happens when we can increase the number of people who believe something simply by changing the voice that it is read in? There are entire careers exploring the boundaries of influence and persuasion; how does easy access to “trustable” voices change our understanding of what signals point to trustworthiness? The BBC has been exploring British attitudes to regional accents in a similar way — drawing, fascinatingly, from a study of how British people reacted to different voices on the radio in 1927. We are clearly continuing a long tradition of analyzing the impact of voice and voices on how we understand and feel about information.

The post Mozilla research shows some machine voices score higher than humans appeared first on The Mozilla Blog.

by Mozilla at Thu May 07 2020 15:36:01 GMT+0000 (Coordinated Universal Time)

OMG! Ubuntu

Ubuntu Dev Details Work Done to Make GNOME Shell Faster in 20.04

Ubuntu 20.04 LTS's improved GNOME Shell performance didn't arrive out of nowhere. Now, in a new forum post the Ubuntu dev behind the effort explains more.

This post, Ubuntu Dev Details Work Done to Make GNOME Shell Faster in 20.04 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Thu May 07 2020 15:30:05 GMT+0000 (Coordinated Universal Time)

Corey James

GraphQL RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js

Hello, welcome to my blog! Following up on my most recent post, “RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js”, I have modified the REST API I made into a GraphQL API. What is GraphQL? GraphQL is a query language for an API. GraphQL allows front-end applications to have control to specify the …

Continue reading "GraphQL RealWorld API – TypeScript, JWT, MongoDB, Express and Node.js"
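As a small sketch of the control GraphQL gives the client, a front-end can name exactly the fields it wants back. The schema and field names below are hypothetical, not taken from the post's actual API:

```typescript
// A client-side GraphQL query naming only the fields it needs.
// Schema, types, and field names here are illustrative only.
const query = `
  query {
    article(slug: "example-slug") {
      title
      author {
        username
      }
    }
  }
`;

// In a front-end app this string would be POSTed as JSON
// to the GraphQL endpoint.
const body = JSON.stringify({ query });
console.log(body);
```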

by Corey James at Thu May 07 2020 00:55:17 GMT+0000 (Coordinated Universal Time)

Wednesday, May 6, 2020

OMG! Ubuntu

Microsoft’s New Surface Book Ad Mentions …Linux?!

A new product video for the Microsoft Surface Book 3 makes the ability to 'run Linux on Windows' a core selling point, cementing Microsoft's love for its former rival.

This post, Microsoft’s New Surface Book Ad Mentions …Linux?! is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed May 06 2020 23:28:43 GMT+0000 (Coordinated Universal Time)

Linux Marketshare Doubled Last Month, Stats Reveal

We've all had to adapt to different ways of working of late, and according to stat trackers NetMarketShare that includes making more use of Linux!

This post, Linux Marketshare Doubled Last Month, Stats Reveal is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed May 06 2020 22:51:48 GMT+0000 (Coordinated Universal Time)

14 projects chosen by GNOME for Google Summer of Code 2020

Improved desktop notifications and a Wayland-compatible battery testing tool are among the 14 projects selected by GNOME for this year's Google Summer of Code (GSoC).

This post, 14 projects chosen by GNOME for Google Summer of Code 2020 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed May 06 2020 20:23:47 GMT+0000 (Coordinated Universal Time)

SoftMaker Office 2021 Hits Beta, is Free to Download (For Now)

A beta release of SoftMaker Office 2021 is available to download for free on Windows, macOS and Linux. For those unfamiliar with it SoftMaker Office is a paid, closed-source productivity suite created by SoftMaker, a […]

This post, SoftMaker Office 2021 Hits Beta, is Free to Download (For Now) is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

by OMG! Ubuntu at Wed May 06 2020 10:07:00 GMT+0000 (Coordinated Universal Time)

Steven Le

Angular Bootstrap Project Developments Part 6

Hello, Hello again and welcome back to the (supposed) last installment!

Today we’re going to be going through a short topic: Having a Google maps embed on a component of the website.

Researching and implementing this turned out to be pretty easy. It doesn’t follow the conventional Google Maps API method, but instead uses a third party to accomplish the mapping. The main trouble I ran into was the styling that came with it, which I overcame with the iframe documentation. Let’s start the process.

Implementing a Google Maps Embed

The first thing I did was to get an embedded Google Maps link from this third-party website. Insert the location of your choice and set the width and height to your desired size. The HTML code you get looks like a lot, but it isn’t really, as you’re only going to be using one part of it: the src. Here’s what it looks like:

Repeating what I said before we’re only really using this part of this long html code:

src="" frameborder="0" scrolling="no" marginheight="0" marginwidth="0"

Like the picture and the code above, though, we’re also going to be using the iframe tag; I just cannot add it here, as WordPress has trouble displaying HTML code.

Go through the process of creating a component as you did in the older parts. As a refresher just call:

 ng generate component insert-component-name-here

In my project I made my component a contact-info page. After that, add it to your app-routing.module.ts by importing the component and adding a route. It should look something like this:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { HomeComponent } from './home/home.component';
import { ProjectsComponent } from './projects/projects.component';
import { ContactInfoComponent } from './contact-info/contact-info.component';
import { NotfoundComponent } from './notfound/notfound.component';

const routes: Routes = [
  {path: "", pathMatch: "full", redirectTo: "home"},
  {path: "home", component: HomeComponent},
  {path: "projects", component: ProjectsComponent},
  {path: "contact-info", component: ContactInfoComponent},
  {path: "404", component: NotfoundComponent},
  {path: "**", redirectTo: '404'}
];

@NgModule({
  imports: [
    RouterModule.forRoot(routes, {useHash: true})
  ],
  exports: [RouterModule]
})
export class AppRoutingModule { }

Code link here.

Now onto the html code for this embed, insert the usual !DOCTYPE html, html and body tags to be used:

And like older parts I’m going to add divs for the website’s styling and copying over the same css.

In order for this embed to fit properly and scale when the screen size changes, we’re going to use a bit of Bootstrap, specifically its grid system. Under the website-background div we’re going to insert three divs: a div with class container, a div with class row, and a div with class col-md-12. Each of these plays a part in the grid layout: container is the box we’re working within, row is a row inside that box, and col-md-12 sets the column width. Bootstrap rows are divided into 12 columns, so col-md-12 means the embed spans the entire row.

Within these divs, I created a div class to handle the css and inserted an iframe tag with the source from above.

All in all it should look something like this:
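As a sketch of what that nesting looks like (the website-background and map-embed class names are from my own CSS, and the src value is a placeholder for the embed URL generated by the third-party site):

```html
<!-- Bootstrap grid wrapping the map iframe; src is a placeholder. -->
<div class="website-background">
  <div class="container">
    <div class="row">
      <div class="col-md-12">
        <div class="map-embed">
          <iframe src="YOUR_EMBED_URL" frameborder="0" scrolling="no"
                  marginheight="0" marginwidth="0"></iframe>
        </div>
      </div>
    </div>
  </div>
</div>
```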

Code Link here.

I opted to add another row with my information to be displayed under it but that’s optional.

Unlike the other components, this one has specific spacing on the container so I’ll add it here as well.

for the iframe styling and

for the background + header/footer padding.

Code link here.

And that was simple: you now have a Google Maps embed on your website. Thanks for reading!

Now that I’ve finished what I had been working on for a little while, I may come back to this project and improve aspects of it, maybe add more features, but as it stands, it is complete. If there are any changes I will add them as another part of this series. Thanks again!

Click here to go to the last part

Click here to go to the beginning of the project

by Steven Le at Wed May 06 2020 18:49:48 GMT+0000 (Coordinated Universal Time)


More on COVID Surveillance: Mobile Phone Location

Previously I wrote about the use of mobile apps for COVID contact tracing. This idea has gotten a lot of attention in the tech press — probably because there are some quite interesting privacy issues — but there is another approach to monitoring people’s locations using their devices that has already been used in Taiwan and Israel, namely mobile phone location data. While this isn’t something that people think about a lot, your mobile phone has to be in constant contact with the mobile system and the system can use that information to determine your location. Mobile phones already use network-based location to provide emergency location services and for what’s called assisted GPS, in which mobile-tower based location is used along with satellite-based GPS, but it can, of course, be used for services the user might be less excited about, such as real-time surveillance of their location. In addition to measurements taken from the tower, a number of mobile services share location history with service providers, for instance to provide directions in mapping applications or as part of your Google account.

If what you are trying to do is as much COVID surveillance as cheaply as possible, this kind of data has several big advantages over mobile phone apps. First, it’s already being collected, so you don’t need to get anyone to install an app. Second, it’s extremely detailed because it has everyone’s location and not just who they have been in contact with. The primary disadvantage of mobile phone location data is accuracy; in some absolute sense, assisted GPS is amazingly accurate, especially to those old enough to remember when handheld GPS was barely a thing, but generally we’re talking about accuracies to the scale of meters to tens of meters, which is not good enough to tell whether you have been in close contact with someone. This is still useful enough for many applications and we’re seeing this kind of data used for a number of anti-COVID purposes such as detecting people crowding in a given location, determining when people have broken quarantine and measuring bulk movements.

But of course, all of this is only possible because everyone is already carrying around a tracking device in their pocket all the time and they don’t even think about it. These systems just routinely log information about your location whether you downloaded some app or not, and it’s just a limitation of the current technology that that information isn’t precise down to the meter (and this kind of positioning technology has gotten better over time because precise localization of mobile devices is key to getting good performance). By contrast, nearly all of the designs for mobile contact tracing explicitly prioritize privacy. Even the centralized designs like BlueTrace that have the weakest privacy properties still go out of their way to avoid leaking information, mostly by not collecting it. So, for instance, if you test positive, BlueTrace tells the government who you have been in contact with; if you aren’t exposed to Coronavirus, the government doesn’t learn much about you1.

The important distinction to draw here is between policy controls to protect privacy and technical controls to protect privacy. Although the mobile network gets to collect a huge amount of data on you, this data is to some extent protected by policy: laws, regulations, and corporate commitments constraining how that data can be used2 and you have to trust that those policies will be followed. By contrast, the privacy protections in the various COVID-19 contact tracing apps are largely technical: they don’t rely on trusting the health authority to behave properly because the health authority doesn’t have the information in its hands in the first place. Another way to think about this is that technical controls are “rigid” in that they don’t depend on human discretion: this is obviously an advantage for users who don’t want to have to trust government, big tech companies, etc. but it’s also a disadvantage in that it makes it difficult to respond to new circumstances. For instance, Google was able to quickly take mobility measurements using stored location history because people were already sharing that with them, but the new Apple/Google contact tracing will require people to download new software and maybe opt-in, which can be slow and result in low uptake.

The point here isn’t to argue that one type of control is necessarily better or worse than another. In fact, it’s quite common to have systems which depend on a mix of these3. However, when you are trying to evaluate the privacy and security properties of a system, you need to keep this distinction firmly in mind: every policy control depends on someone or a set of someones behaving correctly, and therefore either requires that you trust them to do so or have some mechanism for ensuring that they in fact are.

  1. Except that whenever you contact the government servers for new TempIDs it learns something about your current location. 
  2. For instance, the United States Supreme Court recently ruled that the government requires a warrant to get mobile phone location records. 
  3. For instance, the Web certificate system, which relies extensively on procedural controls but is increasingly backed up by technical safeguards such as Certificate Transparency

The post More on COVID Surveillance: Mobile Phone Location appeared first on The Mozilla Blog.

by Mozilla at Wed May 06 2020 18:01:16 GMT+0000 (Coordinated Universal Time)

Mozilla announces the first three COVID-19 Solutions Fund Recipients

In less than two weeks, Mozilla received more than 160 applications from 30 countries for its COVID-19 Solutions Fund Awards. Today, the Mozilla Open Source Support Program (MOSS) is excited to announce its first three recipients. This Fund was established at the end of March, to offer up to $50,000 each to open source technology projects responding to the COVID-19 pandemic.

VentMon, created by Public Invention in Austin, Texas, improves testing of open-source emergency ventilator designs that are attempting to address the current and expected shortage of ventilators.

The same machine and software will also provide monitoring and alarms for critical care specialists using life-critical ventilators. It is a simple inline device plugged into the airway of an emergency ventilator that measures flow and pressure (and thereby volume), making sure the ventilator is performing to specification, such as the UK RMVS spec. If a ventilator fails, VentMon raises an audio and internet alarm. It can be used for testing before deployment, as well as ICU patient monitoring. The makers received a $20,000 award which enables them to buy parts for the VentMon to support more than 20 open source engineering teams trying to build ventilators.

Based in the Bay Area, Recidiviz is a tech non-profit that’s built a modeling tool that helps prison administrators and government officials forecast the impact of COVID-19 on their prisons and jails. This data enables them to better assess changes they can make to slow the spread, like reducing density in prison populations or granting early release to people who are deemed to pose low risk to public safety.

It is impossible to physically distance in most prison settings, and so incarcerated populations are at dangerous risk of COVID-19 infection. Recidiviz’s tool was downloaded by 47 states within 48hrs of launch. The MOSS Committee approved a $50,000 award.

“We want to make it easier for data to inform everything that criminal justice decision-makers do,” said Clementine Jacoby, CEO and Co-Founder of Recidiviz. “The pandemic made this mission even more critical and this funding will help us bring our COVID-19 model online. Already more than thirty states have used the tool to understand where the next outbreak may happen or how their decisions can flatten the curve and reduce impact on community hospital beds, incarcerated populations, and staff.”

COVID-19 Supplies NYC is a project created by 3DBrooklyn, producing around 2,000 face shields a week, which are urgently needed in the city. They will use their award to make and distribute more face shields, using 3D printing technology and an open source design. They also maintain a database that allows them to collect requests from institutions that need face shields as well as offers from people with 3D printers to produce parts for the face shields. The Committee approved a $20,000 award.

“Mozilla has long believed in the power of open source technology to better the internet and the world,” said Jochai Ben-Avie, Head of International Public Policy and Administrator of the Program. “It’s been inspiring to see so many open source developers step up and collaborate on solutions to increase the capacity of healthcare systems to cope with this crisis.”

In the coming weeks Mozilla will announce the remaining winning applicants. The application form has been closed for now, owing to the high number of submissions already being reviewed.

The post Mozilla announces the first three COVID-19 Solutions Fund Recipients appeared first on The Mozilla Blog.

by Mozilla at Wed May 06 2020 13:59:19 GMT+0000 (Coordinated Universal Time)

Yoosuk Sim

Act 3 Scene 1

Wait, what?

The last post ended with completing Act1 Scene1, with hints to Act1 Scene2. Yeah, a lot happened since.

Goblin Camp

Some of the last work in the blog was about Goblin Camp, a revival project of an abandoned source code base, which in turn was inspired by a great game, Dwarf Fortress. Since then, I have learned more about data structures and object design patterns. With each enlightenment in programming, I became more and more aware of why this code had been abandoned multiple times by different groups. I became one of them. This doesn't mean my goal of marrying parallel programming with this great game concept is abandoned: since that last post I also took a course on GPU programming, and it is giving me new ideas and goals. It just will not be happening with the existing Goblin Camp. More on this in the future. And this concluded my Act 1.

How about Act2

My Act 2 began with my co-op placement at Fundserv. It was truly a learning experience, in the best sense of the word. I feel spoiled by experiences I can only hope to keep having as I continue my journey as a software developer. I was extremely lucky to have entered the company as it was going through a massive modernization process. New infrastructure was being put in place that allowed for a mature work-from-home environment; this played a pivotal role in the company's continued success during the COVID-19 crisis. Not only that, I was placed with a team in charge of spearheading the standardization of the new software development methodology, including creating a new CI/CD pipeline and splitting a monolithic code base into multiple microservices, to name a few. My job was to learn, apply, and document, which gave me a wealth of hands-on interaction with multiple products that would later escalate to production. My code, in production. It also meant I would create knowledge transfer documents and prepare KT sessions to introduce the new methodology to other developers, although regrettably, due to COVID-19, the KT session was postponed past my contract period. Still, I gained enough knowledge to stand before other programmers and share it. I very much felt that I was part of a development community. I was growing up as a programmer. The completion of my two successful co-op semesters at Fundserv also marked the end of my Act 2.

Act 3 Scene 1: Google Summer of Code

Just as my Act 2 ended, I was fortunate enough to be accepted into Google Summer of Code 2020. I will be working with GCC to begin the implementation of OMPD. This would allow GDB to debug OpenMP code in a more sensible manner that better reflects the OpenMP standard. I am very excited to be working with C/C++ code, and I look forward to writing more about it as I progress through the project.

by Yoosuk Sim at Wed May 06 2020 13:00:59 GMT+0000 (Coordinated Universal Time)