Here is one of my life-saving git commands, which has allowed me to retrieve lost commits easily over the past few years. Save this alias carefully in your . It will launch gitk displaying the dangling commits, i.e., the commits that…
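A common form of such an alias (a sketch only; the alias name and exact form here are illustrative, not necessarily the one from the post) feeds the dangling commits reported by `git fsck` into gitk:

```ini
# Hypothetical ~/.gitconfig entry; "lost" is an illustrative alias name.
[alias]
    lost = !gitk --all $(git fsck --no-reflog | awk '/dangling commit/ {print $3}')
```

With this in place, `git lost` opens gitk showing every commit that is no longer reachable from any branch or tag, so you can inspect and recover it.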
Tip of the Day: do not try to install a package through the Package Manager on Synology’s DSM 6 under Safari. You will end up with the following error: “Operation Failed”. Use Chrome or Firefox instead; they work perfectly.
Here is the description of the plan I intend to implement to have a proper backup system. For too many years I have been living on the edge of disaster, i.e., the loss of all my data in case of a major event such as a house fire, flooding, etc. I used to back up my data a lot, but each system requires different software and methods: I have to back up Raspberry Pis, Windows desktops and laptops, and a Mac. So I have a lot of USB drives with out-of-date backups, and they are all on-site. In case of fire, everything would be lost.
This is about to change in the next few weeks with my Multi-Tiers Backup Plan.
I had FileVault frozen for many weeks for an unknown reason. It was stuck with the “Encryption paused” message. I finally found a way to unpause it and install macOS Sierra.
There is a highly strategic thing to do when dealing with a multi-developer, multi-branch project, especially if you intend to start automating merges between branches: enforce a strict structure for the import statements in your Python scripts, and in particular import one and only one element per line. Here is why.
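To illustrate the reasoning with a minimal sketch (module names here are made up): when each branch adds its import on its own line, each branch’s diff against the base is a pure line addition, so a three-way merge can apply both changes automatically. With a combined `from app.models import User, Order` style, both branches would modify the same line, which guarantees a merge conflict.

```python
import difflib

# Base file uses one import per line (module names are hypothetical).
base = ["from app.models import User\n"]

# Branch A and branch B each add a different import on its own new line.
branch_a = base + ["from app.models import Order\n"]
branch_b = base + ["from app.models import Invoice\n"]

# Each branch's diff against the base only ADDS a line; no existing line
# is modified, so a three-way merge applies both additions cleanly.
diff_a = list(difflib.unified_diff(base, branch_a, lineterm=""))
diff_b = list(difflib.unified_diff(base, branch_b, lineterm=""))
print([l for l in diff_a if l.startswith("+") and not l.startswith("+++")])
print([l for l in diff_b if l.startswith("+") and not l.startswith("+++")])
```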
Sometimes you can get an exception that occurs on a recurring basis but is really hard to reproduce. By the look and feel of such issues, and after a deep study of the logs, it can turn out to be a very classic race condition, where several threads want to access a piece of data while it is being changed by someone else.
The obvious answer to such an issue is to implement a lock: the data is locked each time anyone wants to access it, preventing any other thread from reading or writing this shared data while it is “locked”. A more clever approach is to lock only on write: the Readers/Writer Lock.
If you are using multithreading in Python, you have some options. But if you are using Twisted and want to lock a shared resource that is accessed by several concurrent Deferreds using this pattern, you have no option other than to develop your own. In the rest of this post I present the module I have developed to bring a Readers/Writer Lock to Twisted: txrwlock.
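For the plain-multithreading case, the idea behind a Readers/Writer lock can be sketched with standard `threading` primitives. This is an illustrative implementation only, not txrwlock’s actual API (which targets Twisted’s Deferreds, not threads):

```python
import threading

class ReadersWriterLock:
    """Many concurrent readers, exclusive writers (illustrative sketch)."""

    def __init__(self):
        self._readers = 0                      # number of active readers
        self._counter_lock = threading.Lock()  # guards the reader counter
        self._write_lock = threading.Lock()    # held while anyone writes

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()     # first reader blocks writers

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()     # last reader lets writers in

    def acquire_write(self):
        self._write_lock.acquire()             # exclusive access for writers

    def release_write(self):
        self._write_lock.release()

# Two readers can hold the lock at the same time; a writer has to wait
# until both readers have released it.
rw = ReadersWriterLock()
rw.acquire_read()
rw.acquire_read()
print(rw._write_lock.locked())  # True: writers are blocked
rw.release_read()
rw.release_read()
```

Readers never block each other, which is the whole point when reads vastly outnumber writes; only a writer takes the exclusive lock.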
I just came across a 2012 video about Guake. It is so cool to see that there are people who take the time and money to present open source software. This video is about an old version of Guake, from before I took over the project, but it was a very good surprise to find it on YouTube!
I like to simplify my workflow. I have plenty of aliases to speed up the execution of recurrent tasks, such as rebasing against “master” when I want to submit a patch to a project on GitHub.
For that, I set the git branch where I am hacking to track the upstream remote, not origin. Here is why.
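A sketch of this setup (remote and branch names here are the conventional ones for a fork-based workflow, not necessarily those from the post): add the original repository as a remote called `upstream` and make the working branch track it, so a bare `git pull --rebase` rebases against upstream’s master.

```shell
# Illustrative commands; the URL and branch name are placeholders.
git remote add upstream https://github.com/<owner>/<project>.git
git fetch upstream
git branch --set-upstream-to=upstream/master my-feature
git pull --rebase   # now rebases my-feature on top of upstream/master
```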
I am working on a set of patches for the Apache Spark project to ease the deployment of complex Python programs with external dependencies. Deploying a job should be as easy as possible, and Wheels make this really easy.
Deployment is never a fascinating task; we as developers want our code to work in production exactly as it does on our machines. Python was never really good at deployment, but in recent years it has become easier and more standardized to package a project, describe its dependencies in a unified way, and have them installed properly with Pip, isolated inside a virtualenv. It is, however, not obvious at first sight for non-Pythonistas, and there are several tasks to do to make everything automatic for a Python package developer, and thus for a PySpark developer as well.
In this blog post I describe some thoughts on how PySpark should allow users to deploy full Python applications, not just simple Python scripts, by handling Wheels and isolated virtual environments.
The main idea behind this proposal is to let developers manage the Python environment deployed on the executors, instead of being jailed by what is actually installed in the Spark executors’ Python environment. If you agree with this approach, please add a comment to the JIRA ticket to speed up its integration into Spark 2.x.
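As a point of comparison, the standard way to ship dependencies today (file and job names below are illustrative) is to build wheels and pass each one to `spark-submit` explicitly, which is exactly the manual bookkeeping this proposal aims to remove:

```shell
# Build wheels for every dependency listed in requirements.txt.
pip wheel -r requirements.txt -w dist/

# Ship them explicitly to the executors alongside the job script.
spark-submit \
    --master yarn \
    --py-files dist/dep_one-1.0-py2.py3-none-any.whl,dist/dep_two-2.1-py2.py3-none-any.whl \
    my_job.py
```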