Finally an update!

The last post was in 2013, and the one before that inaugurated 2012 with a "Happy New Year!". That's several Happy New Years since then, and much has changed in the meantime.

I moved from Dietikon to Zürich in 2014. New apartment, new roommates. I completed my Bachelor's degree in Informatics (Software Systems) in 2015. Finally.

I started working for a great company in 2013: iniLabs Ltd., a spin-off from the Institute of Neuroinformatics (INI) here at the University of Zürich that works on neuromorphic hardware, specifically bio-inspired vision sensors. I had the opportunity to work with IBM's TrueNorth development team on integrating the sensors with their platform in 2013-2014 as part of the DARPA SyNAPSE project. I met lots of great people, went to California (US) several times, participated in the 2015 Telluride (Colorado, US) Neuromorphic Engineering Workshop, and met even more awesome people. Two fun years working on embedded hardware, low-level C libraries, VHDL FPGA logic and Java GUIs: everything I ever wanted. And it's all set to continue, as we're currently expanding our offering of neuromorphic devices.

On the open-source front, I started contributing to usb4java in July 2013 due to my work at iniLabs, where we use it extensively in the jAER project to talk to the vision sensors in a performant and platform-independent way, as well as in the Flashy project, a tool to update firmware and logic on our sensor devices. Almost all of the code I've worked on is available openly from the jAER project or the iniLabs GitHub pages. In 2013 I moved my own projects from self-hosted SVN to Git & GitHub, including the source for this blog. Great service.

After over a decade of self-hosting, I moved everything over to Google Apps. Very happy with not having to care about any of that anymore; I just didn't have the time for server maintenance.

Photos of San Francisco, Colorado, New York, Yellowstone, London and other places I visited can be found in the new gallery, powered by Google Drive. I took most of them during the 2015 road-trip through central US with my good friend Diederik Moeys, a PhD here at INI.

I've gone through all the pages in the blog here and updated them, so they should reflect current reality better. I'm hoping to keep the blog more up-to-date in the future. I've promised myself I'd use it to document the resurrection of my oldest hobby: N-scale model trains. More on that soon.

Posted by Luca Longinotti on 01 Oct 2016 at 18:00
Categories: Website, UZH, NTrains, usb4java, Trips, Longi, iniLabs, Software Comments

An even more secure SSH

First post of 2012, so let's start off with a "Happy New Year!" to everyone.
On an even happier note, I just got word that I passed all my exams. :-)

Now the real topic of this post is SSH, more specifically how to make your SSH connections even more secure than they already are. OpenSSH by default prefers slightly weaker cryptographic algorithms (AES-128 is preferred to AES-256, for example), and for its HMACs it still prefers MD5-based ones, which, while still kinda secure, are clearly weaker than the SHA2-512 based ones, for which OpenSSH added support in the 5.9 release.
Assuming you're running OpenSSH >=5.9 everywhere, like in my setup, configure your sshd's as follows, so that they only offer the most secure known algorithms, strongest variants first. This also restricts the server to SSH protocol 2, sets some miscellaneous login-related settings, and makes the server periodically check that clients are alive, terminating the connection if they aren't.

Protocol 2
LoginGraceTime 1m
PermitRootLogin no
StrictModes yes
MaxAuthTries 3
MaxSessions 5
ClientAliveCountMax 3
ClientAliveInterval 5
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256
MACs hmac-sha2-512,hmac-sha2-256

Configure your SSH client as follows to only connect to sshd's using secure algorithms, again trying the strongest first. This also enables SSH protocol 2 only and periodically checks that the server is alive (especially useful with sshfs and its '-o reconnect' flag when working over unstable links like wireless). It further lowers the amount of data transferred before a rekey; the default is usually between 1G and 4G.
Note that I had to split up some lines for better readability on the blog; you can spot those by the increased indentation. Just always make sure everything ends up on one line!

Host *
  Protocol 2
  ServerAliveCountMax 2
  ServerAliveInterval 4
  Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,aes256-cbc
  KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256
  MACs hmac-sha2-512,hmac-sha2-256,hmac-md5,hmac-sha1
  RekeyLimit 512M
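On the client side, exceptions are sometimes unavoidable: an old router or appliance that only speaks weaker algorithms. Rather than weakening the global defaults, add a narrower Host block for it; since in ssh_config the first obtained value wins, the specific block must come before the 'Host *' one. A sketch, with a hypothetical host name:

```
Host legacy-router
  Ciphers aes256-cbc,aes128-cbc
  MACs hmac-sha1
```

That way only connections to that one host fall back to the weaker set, and everything else keeps the strict defaults.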

With both server and client running OpenSSH >=5.9 and configured correctly, you get an SSH connection using AES256-CTR as the cipher, ECDH-SHA2-NISTP521 for key exchange, and HMAC-SHA2-512 for integrity checking. Basically AES-256 and SHA2-512 everywhere, which, as far as I know, are state-of-the-art in their respective application domains and still considered very secure.
Hope this helps increase security, as well as reliability (the Alive options especially with sshfs).

Posted by Luca Longinotti on 16 Feb 2012 at 15:00
Categories: Longi, Gentoo, Software Comments

CUPS EvenDuplex

Another remind-myself blog-post.
If you've got a printer like mine, which accepts PCL6 and expects an even number of pages when doing duplex printing, CUPS has a very easy solution for you. Open-source software usually does.

*cupsEvenDuplex: True

Add that to your PPD file, and voilà: duplex works even when you submit three pages for printing!
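A quick way to add the keyword from the shell, sketched here on a scratch file (on a real system the file lives under /etc/cups/ppd/<printer>.ppd and CUPS needs a restart afterwards to pick up the change):

```shell
# Create a minimal stand-in PPD, then append the keyword to it.
# demo.ppd is a placeholder; use your printer's actual PPD file.
printf '*PPD-Adobe: "4.3"\n' > demo.ppd
printf '*cupsEvenDuplex: True\n' >> demo.ppd
grep 'cupsEvenDuplex' demo.ppd
```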

Posted by Luca Longinotti on 01 Nov 2011 at 18:04
Categories: Hardware, Software Comments

Software patents: WTF?

Disclaimer: I am not a lawyer, the following depicts my understanding of the issues at hand and my opinion. It is not legal counsel.
A discussion on StackOverflow I was having with the author of liblfds quickly went from technical to commercial/legal: software patents were mentioned, and the world became a little less clear...
Basically it boils down to the possibility that some algorithms/techniques/technologies implemented by Rig may be patented, and thus there might be legal questions about the usability of Rig in the US. I specify the US here because the EU doesn't recognize software patents to the extent of US patent law, where you can theoretically patent anything, which continuously results in totally bogus patents, like this one, if we're talking about data structures and algorithms. As a Swiss citizen living in Switzerland I never really thought about this, since here software patents do not exist at all in the form possible in the US; anything purely algorithmic is hardly patentable.
Since we simply don't care about this possibility here, if you want to code something, you do. Being part of the academic community, it's almost taken for granted that working on and off others' research, improving and implementing it, is possible and even encouraged, especially in an open-source fashion. So no, I wasn't trying to deceive anyone; it just never entered my mind as a real concern that needs significant time dedicated to it. But I'm doing that now, so let's address Mr. Douglass' concerns:

A) the BSD license is incompatible with possible patents, meaning the license of code is related to possible patent claims

This is incorrect. Licenses, copyright and patents are different things, even if somewhat related. It is perfectly possible to create code and license it even if it knowingly violates granted patents; see the LAME case for an obvious example. Usually the license has nothing at all to do with possible patents that said code may be infringing: a code's license attributes the copyright and sets distribution and usage constraints upon the specific code in question. Patents are about ideas and techniques; software patents especially usually just describe a procedure, which can be implemented in many different ways.
Conclusion: licenses relate to the particular code of the particular implementation; they just tell you what kind of restrictions I, as the author of the code, pose upon the code itself with regards to distribution, use and re-use, linkage, etc. There are licenses, like the GPL v3.0 and the Apache License, that do have extra bits and pieces that also regulate patent claims. The GPL v2.0, LGPL, BSD and most others don't give you any special rights regarding patents whatsoever, unless the code is specifically released by the owner of the patents in the first place, which may then forbid later patent infringement suits on compliant derivative works. This again very much depends on the exact wording of the license, and the patent still remains an independent entity.

B) RCU is okay to use and there will be no possible patent-related problems, because it's LGPL

I have no idea where this comes from. Paragraphs 11 and 12 of the LGPL specifically do not grant you any rights whatsoever regarding patents held over what the code (liburcu in this case) does. They actually say that, if faced with patent infringement suits, you must try to comply with both the suit and the license concerning distribution, meaning you either geographically restrict the software in such a case or stop distributing it altogether. Just because there are LGPL implementations of RCU (and others exist under various other licenses, like in the Linux kernel under the GPL v2.0) doesn't automatically mean that you're granted full usage of the patents on the RCU technique, patents which are fully filed, existing and valid. From Wikipedia: "The technique is covered by U.S. software patent 5,442,758, issued August 15, 1995 and assigned to Sequent Computer Systems, as well as by 5,608,893, 5,727,528, 6,219,690, and 6,886,162. The now-expired US Patent 4,809,168 covers a closely related technique."
As such, the fact that an LGPL implementation of RCU exists doesn't automatically grant you any rights upon the RCU patents themselves, nor does it make them automatically freely usable for other implementations.
The fact that Paul McKenney, as the main patent holder, provides a GPL implementation in the kernel may be of help here: his offering a GPL-licensed implementation does protect users of said GPL implementation and derivatives (again, as the GPL defines such) from any patent claims.
Regarding the GPL: "This means that a patent holder who distributes a software package incorporating his patent can no longer assert that patent against people who distribute that package further or incorporate the package in their own product. Asserting a patent restricts the rights granted by the GPL and therefore is not permitted. This means that a competitor is now free to incorporate that package in his own product without having to pay any royalty to the patent holder. Of course that part of the product (and all other parts based on that part) will have to be made available under the GPL."
Whether this directly translates to the LGPL user-space implementation is unclear, but given the involved parties it's very likely. It still doesn't mean any other, new RCU implementation has any rights on the RCU patents; those are still there and valid. You either need to base your work directly on the available GPL/LGPL code and release it under the same license, or get explicit permission from the patent holders.

C) There may be a patent on the non-blocking list or its delete-bit

Harris actually patented the whole thing. I'm not sure at all about the delete-bit alone; it can trivially be shown that the technique of using the unused bits of a pointer to store information was widely known and used before the 2000 patent. Just take a look at Lisp machines and tagged pointers. Several implementations exist, work on extending this was done by various sources, and there is code and there are books on this ("The Art of Multiprocessor Programming"). I have no idea what this means from a legal point of view; non-profit use and research seem to be fully okay. My own code is not a 1:1 implementation of what Harris describes: I use a two-dimensional list, providing KeyNodes at intervals to start re-traversal from a shorter, guaranteed point, and I handle restart of traversal differently, trying a tighter path first.

D) There may be a patent on SMR using Hazard Pointers

There is a patent application, but no granted patent. No idea here either; the technique is used and extended in other papers (RCU+HP, ref-counts+HP). Several implementations exist under various licenses, even of the derivative papers... I am using the concepts too.
In any case, I have to compliment Maged Michael for his papers, those are awesome, very clear and well written.

E) I'm intentionally mis-informing and deceiving users of my library

Hell, NO! As I explained above, software patents are simply a non-issue here; it's something you just don't really think about, other than to laugh at Slashdot, ArsTechnica & co. news about the latest US patent granted on warm water or double-clicking an icon. With this blog post, linked on the library's home page, I'm remedying this for the concerned US citizen.

I believe patents on ideas and abstract concepts, especially software, are fundamentally wrong; they realistically only protect the lazy implementor. Especially with the situation as it is now in the US: realistically, shut down your computer and search for a new job, maybe something involving nature (but they're patenting that stuff too...), because I'd really like to see you prove that just glibc, Gnome and KDE are completely safe from any patent-related question. Even if you own a patent yourself, you can't be totally sure no-one else has patented something similar before, and even less sure what that means for you, if anything. Even the big players have no real idea what's going on; just look at the various browser vendors and Google on VP8 / H.264, or take a look at this graph and tell me how long it took you to either explode in laughter, amusedly shake your head, or both.


In the meantime, I'll happily continue coding on my open-source library, learning new things, experimenting and benchmarking and having fun. And I hope others find this freely given work useful, and may use the library themselves, because it's there and works and may make their life easier.

Posted by Luca Longinotti on 08 Jul 2011 at 11:57
Categories: Rig, CompSci, Software Comments

SSD crazy fast!

Finally replaced the old HD on my workstation with an SSD I bought in January: an OCZ Vertex 2 in 3.5" format, so it fits in the hot-swap trays I've got.
It's actually surprising that so few vendors offer 3.5" editions, as that's what practically all desktops, workstations and servers have, especially considering hot-swap trays and similar drive bays.
Sure, with 2.5" you can pack more into new-generation servers, but those are still incredibly expensive; it also makes sense for laptops, but that's pretty much it.
Anyway, the results are in: disk operations are crazy fast compared to before. Boot (now with OpenRC) is so fast that getting an IP via DHCP was the dominant factor, and changing to a static IP eliminated that one too.
I'm very satisfied with this, and ext4 with the 'discard' option (TRIM support) seems to work perfectly fine.
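The 'discard' mount option goes into /etc/fstab; for reference, an entry looks something like this (device and mount point are placeholders for your setup, discard is the part that enables online TRIM on ext4):

```
/dev/sda2    /    ext4    noatime,discard    0 1
```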

On the Rig front, I've not done much: some more work went into testing, the typeinfo stuff was completely removed, and a few more checks with regards to sizes and permitted flags were added.

Another project I'll probably tackle soon is writing a build system that doesn't suck: one that tries to really be minimal and doesn't support the world and more. It just needs to generate Makefiles (and probably Visual Studio/Eclipse support too). The build itself is left to the relevant tools; this really only needs to gather info about where we're running and the features we want, make that info available to the user (some header file), and generate appropriate Makefiles. Those Makefiles shouldn't depend back on the generator itself, so you can also just generate generic Makefiles and not need the generator installed on every system with all its dependencies. I mostly want to get rid of CMake and its horrible mess of half-baked modules. Anyone want to help? It's going to be in Python, and it should support only C/C++ builds.

Posted by Luca Longinotti on 13 May 2011 at 10:10
Categories: Hardware, Rig, Software Comments

KVM, slow IO and strange options

In my quest for portability, I wanted to test a few things on several operating systems, mostly BSDs and Sun Oracle Solaris.
Seeing as virtualization is the current hype, I decided to give Linux KVM a try, as it promised to be the more open solution while requiring less effort to set up. In my case, for a few dev-VMs to try stuff on, that's kinda important: I don't want to spend hours maintaining this setup, but I also don't expect stellar performance for heavy workloads from it.
Gentoo makes the installation quite easy, all you need is to enable KVM in your kernel and emerge app-emulation/qemu-kvm.

  • clearly the kernel needs to have KVM support enabled for your CPU. I have all the VirtIO stuff disabled; I don't need it. I tried VirtIO-blk to speed up IO performance but didn't notice any difference; it probably doesn't do much when you only have 1-2, max. 3 VMs running at any time, with not that much going on in them, for development.
  • qemu-kvm, careful of the USE flags and the QEMU_*_TARGETS!

package.use entries:

media-libs/libsdl X audio video opengl xv
app-emulation/qemu-kvm aio sdl
# remember "alsa" if you use it, for both packages!

make.conf entries:

QEMU_SOFTMMU_TARGETS="arm i386 ppc ppc64 sparc sparc64 x86_64"

'aio' is important for native AsyncIO support and 'sdl' to get a window with your VM in it (unless you always want to use VNC to connect). Most people can also probably reduce QEMU_SOFTMMU_TARGETS to "i386 x86_64", but I wanted to keep the option to emulate some alternative architectures.
Once that's all done, KVM worked perfectly, and I started installing a Xubuntu image just to test it. I noticed that IO was incredibly slow, though, and set out to find how to improve its performance. I ended up with the following two Bash functions to install VMs from ISOs and start them with somewhat usable performance. The options are explained below.

# KVM support
kvm-start() {
    /usr/bin/kvm -net nic,macaddr=random -net user -cpu host -smp 4 -m 768 \
        -usb -usbdevice tablet -vga cirrus -drive file=$1,cache=writeback,aio=native
}

kvm-install() {
    /usr/bin/qemu-img create -f raw $1 6G
    /usr/bin/kvm -net nic,macaddr=random -net user -cpu host -smp 4 -m 768 \
        -usb -usbdevice tablet -vga cirrus -drive file=$1,cache=writeback,aio=native \
        -cdrom $2 -boot d
}
  • -drive's cache=writeback,aio=native are crucial for storage performance: while aio helped just a little, changing the cache mode to writeback massively improved IO performance! Also, raw disk images perform better than qcow2!
  • -cpu host -smp 4 -m 768 passes along all available CPU features, and raising memory from the default 128 helps too.
  • -usb -usbdevice tablet was needed to fix the broken mouse (it just didn't react at all in my case!); it also makes it possible to drag the mouse off the VM's screen and back without always needing CTRL+ALT, but this also kinda depends on the OS you're emulating.
  • -vga cirrus enables support for resolutions up to 1024x768 and has very good compatibility all around. You could use -vga vmware for Linux guests to get very high resolutions, but it doesn't work that well with other (especially older) operating systems.
  • -net nic,macaddr=random -net user is for the standard, software routed networking, documented as "slow", but more than fast enough for development work (of course not for some kind of high-traffic thousands-of-connections server). Remember to set a valid, random MAC address!
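Since QEMU won't pick a random MAC address for you, a small Bash helper for generating a valid one can be handy. This is just a sketch: it reuses the 52:54 prefix from QEMU's own default range, so the locally-administered bit is set and the multicast bit is clear, as required for a valid unicast address.

```shell
# Print a random locally-administered unicast MAC address.
# The 52:54 prefix matches QEMU's own default range.
random_mac() {
    printf '52:54:%02x:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}
random_mac
```

Substitute its output for the macaddr= value when launching a VM.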

Posted by Luca Longinotti on 08 Feb 2011 at 17:40
Categories: Gentoo, Software Comments

Nouveau ++ and HAL --

I finally did it: I tried out Nouveau, the open-source driver for Nvidia graphics cards, and everything went well. My dual-head setup works as before, thanks also to XMonad, which is one of the few window managers that implements virtual desktop management and multi-head setups the right way.
I've waited this long to be sure it all worked and got tested by lots of other people before me, as I simply can't have the main workstation not displaying anything and spend days getting stuff from Git repositories to try out fixes.
It took me a moment to figure out how XRandr wants the position of monitors specified in xorg.conf, but in the end everything worked out well, and I also managed to massively slim down my Xorg configuration.
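For the record, the XRandR way is to tie Monitor sections to outputs via "Monitor-<output>" options in the Device section, then express positions between the Monitor sections. A minimal sketch with hypothetical output names (check yours with the xrandr tool):

```
Section "Device"
    Identifier "nouveau0"
    Driver     "nouveau"
    Option     "Monitor-DVI-I-1" "LeftMonitor"
    Option     "Monitor-DVI-I-2" "RightMonitor"
EndSection

Section "Monitor"
    Identifier "LeftMonitor"
    Option     "LeftOf" "RightMonitor"
EndSection

Section "Monitor"
    Identifier "RightMonitor"
EndSection
```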
So now I have a kernel with no proprietary drivers, and that also means I can finally build a monolithic hardened kernel, without any modules. Works great!
2.6.37 will also bring temperature sensor support to Nouveau, from what I'm told; I'm waiting on that!
This also brings a fully hardened desktop a little bit closer, as every binary piece of software gone is a problem less there.

I also got rid of HAL entirely, since it's being deprecated; thanks to uam and pmount I can still mount/unmount USB drives with only udev running, and I don't need any of the Policy/Console/Udisk-Kit stuff, which I hope never to have to install.
And I'm taking Midori for a test-drive, looking for a good alternative browser to Firefox, maybe it will be, maybe it won't.

Posted by Luca Longinotti on 04 Jan 2011 at 17:29
Categories: Gentoo, Software Comments

A few useful pieces of software

Continuing my series about useful software I use daily, I decided to finish it up quickly by just posting a few names, links and descriptions.

  • XMonad - tiling window manager, totally changed how I interact with my desktop, the keyboard is a much more efficient way to do things ;)
  • LLVM + Clang - new compiler infrastructure and a C/C++ compiler built on it, much faster than GCC and with much more helpful error messages, though without all of GCC's extensions and features
  • Eclipse - the open-source IDE, makes programming faster and more fun!
  • CDT for Eclipse - C/C++ plug-in for Eclipse, makes developing C projects that much easier
  • PyDev for Eclipse - Python plug-in, to support your favorite scripting language better ;)
  • SSHFS - mounts remote file-systems over SSH, providing strong encryption and authentication (uses the FUSE framework on Linux)

I'll soon start posting about my latest software project, Rig, which I have been working on for quite a while, so stay tuned!

Quick events guide:

  • 4 November, big ASTAZ party (aka. Free-Alcohol) @ Dynamo Zürich
  • 4-18 November, ExpoVina 2010 @ Bürkliplatz Zürich

Posted by Luca Longinotti on 30 Oct 2010 at 00:32
Categories: Longi, Software Comments

UZH wlan using WPA2

It is possible! ;)
Instead of going through the "public" WLAN and then using the VPN, you can just connect to the "uzh" SSID and use its WPA2 encryption.
This usually works much better; on "public" I sometimes lose the connection or can't connect at all...
I found this out by just trying to connect to it using my UniAccess login data and the same encryption scheme I used for the "eth" network at ETHZ, which is basically WPA2 Enterprise with IEEE 802.1X authentication, and look there, it worked!
I have no idea why the ID (Informatik-Dienste) doesn't mention the possibility of using the "uzh" network, as it is clearly superior in its implementation and security. Here is the WICD encryption-scheme file I use, based on the one by a friend of mine (Lukas Manser) for the ETHZ network.


name = UZH Network WPA2
author = Luca Longinotti
version = 1
require identity *Identity password *Password
ctrl_interface = /var/run/wpa_supplicant
network = {
    pairwise=CCMP TKIP
    group=CCMP TKIP
}

One can probably extract the relevant information from here even for other OSes. Have fun!


Update: it seems that on the 20th of October (according to Google) UZH updated their pages to mention the possibility of using the "uzh" SSID and WPA2, and added instructions for it here. They also mention that in the future it's going to be the main SSID, and to migrate to it if possible. So now it's official!

Posted by Luca Longinotti on 23 Oct 2010 at 19:45
Categories: UZH, Software Comments

Get your mails quick using claws-mail

I've been a long-time advocate and user of Mozilla Thunderbird as my email client, but with version 3 the already bloated software just got worse, and most of its new features were useless to me. So I started searching for a much more lightweight graphical email client, and found claws-mail to be a perfect fit.
I've been using it for a few months now and am really happy with it: it's blazing fast and provides only the truly useful features at its core, leaving the rest to plugins.
One such plugin I use is RSSyl, which aggregates your feeds like a bunch of mail folders, and each entry is presented like an email.
There is even a Windows port of claws-mail (bundled with gpg4win), so even Windows users can try it!

Posted by Luca Longinotti on 27 Sep 2010 at 19:00
Categories: Software Comments
