# December 20, 2013

## Lars Wirzenius

### Configuring gitano

I recently set up a Gitano instance as http://git.liw.fi/. Gitano is a very nice git server, which adds user and group management and access control in front of git itself, and keeps all configuration in git, where it's nicely traceable and auditable. It's also a command line based thing, rather than a slow, resource-hungry web application, and thus much more to my liking.

Daniel, the Gitano upstream, has a "gitano-all" source tree for creating an unofficial Gitano Debian package, which includes cgit, a fast git web interface. This is never going to be accepted into Debian, of course, but it makes it easier to install Gitano on your server. This hanky-panky is needed because Gitano and cgit both use libgit2, a library without a stable API at this time, which makes it difficult to package for Debian. cgit seems to embed other such projects as well.

Anyway, once you've installed the Gitano software (and cgit, if you want that), there's the matter of setting up a Gitano instance.

Each Gitano instance is its own Unix user, accessed over ssh. Thus, one machine can host any number of Gitano instances, and they'll be nicely isolated from each other by the normal Unix setup. Each instance manages its own set of Gitano users and groups, which only exist within that instance. Users are identified by ssh public keys: there are no passwords.

Here's my slightly edited checklist for setting up a Gitano instance. It assumes Gitano and cgit and their dependencies are installed.

Set up Gitano itself:

• create the Unix user
• I chose git as the username, so that git@git.liw.fi is my Gitano instance
• copy your ssh public key to the system; you'll need it for gitano-setup
• the key file needs to be readable by the Gitano instance Unix user
• run gitano-setup as the Gitano instance user
• su - git
• gitano-setup
• answer questions: I chose defaults for most things
• if you screw this up, you can start over by deleting everything in the home directory
• from your own ssh account: ssh git@host whoami
• this should produce some output telling you you're in the gitano-admin group
• if that works, Gitano is correctly setup

Set up a git daemon for public git repositories:

• edit /etc/inetd.conf to add the following (all on one line; it is broken across several lines here for display purposes):
git stream tcp nowait nobody /usr/bin/git
git daemon --inetd
--interpolated-path=/home/git/repos/%D /home/git/repos
• /etc/init.d/openbsd-inetd restart

Set up cgit and Apache:

# CGIT stuff
DirectoryIndex /cgi-bin/cgit/cgit.cgi
Alias /cgit.png /usr/share/cgit/htdocs/cgit.png
Alias /cgit.css /usr/share/cgit/htdocs/cgit.css
<Directory "/home/git/repos">
    Allow from all
    AllowOverride none
    Order allow,deny
</Directory>
• /etc/init.d/apache2 restart
• create /etc/cgitrc:
# Enable caching of up to 1000 output entries
cache-size=1000

# Specify some default clone prefixes
clone-prefix=git://testgit

# Specify the css url
css=/cgit.css

# Specify the logo url
logo=/cgit.png

# Show extra links for each repository on the index page

# Show number of affected files per commit on the log pages
enable-log-filecount=1

# Show number of added/removed lines per commit on the log pages
enable-log-linecount=1

# Set the title and heading of the repository index page
root-title=testgit
root-desc=Lars's test git repositories

snapshots=tar.gz

#source-filter=/usr/lib/cgit/filters/syntax-highlighting.sh

remove-suffix=1

enable-git-config=1

strict-export=git-daemon-export-ok

scan-path=/home/git/repos

##
## List of common mimetypes
##
mimetype.gif=image/gif
mimetype.html=text/html
mimetype.jpg=image/jpeg
mimetype.pdf=application/pdf
mimetype.png=image/png
mimetype.svg=image/svg+xml

Finally, you should review, and possibly alter, Gitano access control rules.

• commit and push

Some Gitano commands:

• ssh git@YOURHOST create foo
• ssh git@YOURHOST ls

Happy hacking.

PS. I wrote a yarn test suite for my Gitano ACL, which may be interesting if you're new to Gitano.

## Riku Voipio

### Replicant on Galaxy S3

I recently got myself a Galaxy S3 for testing out Replicant, an Android image made only of open source components.

### Why Galaxy S3?

It is well supported in Replicant; almost every driver is already open source. The hardware specs are acceptable: a 1.4 GHz quad core, 1 GB of RAM, microSD, and all the peripheral chips one expects in a phone. The Galaxy S3 has sold insanely well (50 million units, supposedly), meaning I won't run out of accessories and aftermarket spare parts any time soon. The massive installed base also means a huge potential user community. The S3 is still available new, with two years of warranty.

### Why not

While the S3 is still available new, it is safe to assume production is already ending - a 1.5-year-old product is ancient history in the mobile world! It remains to be seen how far the massive user base will hold off obsolescence. Upstream kernel support for the "old" CPU is an open question; Replicant still bases its kernel on the vendor kernel. The bootloader is unlocked, but it can't be changed due to trusted^Wtreacherous computing, preventing things like booting from the SD card. Finally, not everything is open source: the GPU (Mali) driver, while being reverse engineered, is taking its time - and the GPS hasn't been reversed yet.

### Installing replicant

Before installing, you might want to take a copy of the firmware files from the original installation (since Replicant won't provide them). Enable developer mode on the S3 and:
sudo apt-get install android-tools
mkdir firmware
Then just follow the official Replicant install guide for the S3. If you don't mind closed source firmware, after the install you need to push the firmware files back:

mount -o remount,rw /system
Here was my first catch: the wifi firmware from the Jelly Bean based image was not compatible with the older ICS based Replicant.

### Using replicant

Booting into Replicant is fast, a few seconds to the PIN screen. You are greeted with the standard Android lockscreen; the usual slide/pin/pattern options are available. Basic functions like phone, SMS and web browsing have icons on the homescreen and work without a hitch. Likewise the camera seems to work; really, the only smartphone feature missing is GPS.

Sidenote - this image looks a LOT better on the S3 than on my ThinkPad. No wonder people are flocking to phones and tablets when laptop makers use such crappy components.
The grid menu has the standard Android AOSP open source applications in the ICS-style menu, with the addition of an f-droid icon - the installer for open source applications. F-droid is its own project, which complements the Replicant project by maintaining a catalog of Free Software.
F-droid brings hundreds of open source applications not only to Replicant but to any other Android users, including platforms with Android compatibility, such as Jolla's Sailfish OS. Of course the f-droid client is open source, like the f-droid server (which is in Debian, too). The f-droid server is not just repository management; it can take care of building and deploying Android apps.
The WebKit based Android browser renders web sites without issues, and if you are not happy with it, you can download Firefox from f-droid. Many websites will notice you are on mobile and serve mobile versions, which is sometimes good and sometimes annoying. Worse, some pages detect you are on Android and only offer to load their closed Android app for viewing the page. OTOH, I am already viewing their closed source website, so using a closed source app to view it isn't much worse.

The keyboard is again the standard Android one, but most unixy people will probably want the Hacker's Keyboard, with its arrow keys and Ctrl/Alt.

### Closing thoughts

While using Replicant has been very smooth, the lack of GPS is becoming a deal-breaker. I could just copy the gpsd from Cyanogen, like some have done, but that kind of defeats the purpose of having Replicant on the phone. So it might be that I move back to Cyanogen, unless I find time to help reverse engineer the BCM4751 GPS.

20 December, 2013 08:41PM by Riku Voipio (noreply@blogger.com)

## Christoph Egger

Greetings from the FAU Security Team (FAUST), the Uni Erlangen CTF group. We participated in the RuCTFe competition and made it to 4th place. What follows is my write-up on the nsaless service, the main crypto challenge of the competition. nsaless is a Node.js web service providing a short message service: people can post messages, and their followers receive each message encrypted to their individual RSA keys.

The gameserver created groups of 8 users on the service: 7 were just following the first user (and were authorized by the first user to do so), while the first user sent a tweet containing the flag. The service used 512-bit RSA with 7 as the public exponent. While 512-bit RSA is certainly weak, it's strong enough to make it infeasible to break directly.

### Attacking RSA

There are some known attacks against RSA with small exponents if no proper padding is done. The most straightforward version just takes the e-th root of the cipher-text and, if the clear message was small enough, outputs that root as the plain-text. As the flag was long enough to make this attack impossible, we need a somewhat improved attack.
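The naive attack can be replayed in miniature (a hedged toy: e = 3 instead of the service's 7, an arbitrary 512-bit number standing in for a real modulus, and a made-up flag — nothing here comes from the actual service):

```python
def int_nthroot(x, n):
    """Largest integer r with r**n <= x, by binary search."""
    lo, hi = 0, 1
    while hi ** n <= x:
        hi *= 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = 2 ** 512 + 1                          # stand-in "modulus" for the demo
m = int.from_bytes(b"FLAG{short}", "big")  # small plain-text, so m**e < n
c = pow(m, e, n)                           # unpadded RSA encryption

# Since m**e < n, the reduction mod n did nothing: the integer
# e-th root of the cipher-text is exactly the plain-text.
r = int_nthroot(c, e)
recovered = r.to_bytes((r.bit_length() + 7) // 8, "big")
```

The only requirement is m**e < n; proper randomized padding defeats this.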

Reminder:

• In RSA, given a plain-text A, the sender computes Aᵉ mod N to build the cipher-text B.
• Given simultaneous congruences, we can efficiently compute an x ∈ ℤ that satisfies all of them using the Chinese remainder theorem.

For NSAless we actually get several such B for different N (each belonging to a different user who receives the tweet because they follow the poster). This effectively means we get Aᵉ mod N for several different N. Using the Chinese remainder theorem we can now compute an x ∈ ℤ with x ≡ Aᵉ (mod Π Nᵢ). If we use at least e different B for this, we are guaranteed that x actually equals Aᵉ in ℤ: A needs to be smaller than every N used (otherwise we would lose information during encryption), therefore Aᵉ needs to be smaller than the product of the e moduli.

Computing now the e-th root of x we get the plain-text A – the flag.

### Fix

Fixing your service is easy enough: just increase e to a suitable number > 8. At the end of the contest, 5 teams had fixed this vulnerability by using either 17 or 65537.

### EXPLOIT

The basic exploit is shown below. Unfortunately it needs to retrieve all tweets for all users to compute the flags, which just takes too long to be feasible (at least at the end of the competition, when tons of users already existed), so you would need some caching to make it actually work. It would have been a great idea to have users expire from the service after an hour or two!

#!/usr/bin/python

import httplib
import re
import gmpy
import sys

userparse_re = re.compile('<a [^>]*>([^<]*)</a></div>\s*<div>([^<]*)</div>')
tweetparse_re = re.compile("<div id='last_tweet'>([0-9]+)</div>")
followingparse_re = re.compile('<div><a href="/[0-9]+">([0-9]+)</a></div>')

def my_parse_number(number):
    # Turn the recovered integer back into the flag string, byte by byte.
    string = "%x" % number
    if len(string) != 64:
        return ""
    erg = []
    while string != '':
        erg = erg + [chr(int(string[:2], 16))]
        string = string[2:]
    return ''.join(erg)

def extended_gcd(a, b):
    x, y = 0, 1
    lastx, lasty = 1, 0

    while b:
        a, (q, b) = b, divmod(a, b)
        x, lastx = lastx - q * x, x
        y, lasty = lasty - q * y, y

    return (lastx, lasty, a)

def chinese_remainder_theorem(items):
    N = 1
    for a, n in items:
        N *= n

    result = 0
    for a, n in items:
        m = N // n
        r, s, d = extended_gcd(n, m)
        if d != 1:
            raise ValueError("Input not pairwise co-prime")
        result += a * s * m

    return result % N, N

def get_tweet(uid):
    try:
        conn = httplib.HTTPConnection("%s:48879" % sys.argv[1], timeout=60)
        conn.request("GET", "/%s" % uid)
        r1 = conn.getresponse()
        data = r1.read()
        tweet = re.findall(tweetparse_re, data)
        if len(tweet) != 1:
            return None
        followers = re.findall(followingparse_re, data)
        return tweet[0], followers
    except:
        return None

def get_users():
    conn = httplib.HTTPConnection("%s:48879" % sys.argv[1], timeout=60)
    conn.request("GET", "/users")
    r1 = conn.getresponse()
    data1 = r1.read()
    data = dict()
    for i in re.findall(userparse_re, data1)[:100]:
        userinfo = get_tweet(i[0])
        if userinfo != None:
            # map user id to (modulus, (tweet, followers))
            data[i[0]] = (i[1], userinfo)
    return data

users = get_users()
allusers = users.keys()
masters = [user for user in allusers if len(users[user][1][1]) > 0]

for test in masters:
    try:
        followers = users[test][1][1]
        data = []

        for fol in followers:
            n = int(users[fol][0])
            tweet = int(users[fol][1][0])
            data = data + [(tweet, n)]

        x, n = chinese_remainder_theorem(data)

        realnum = gmpy.mpz(x).root(7)[0].digits()
        print my_parse_number(int(realnum))
    except:
        pass

## Dirk Eddelbuettel

### New RcppEigen release 0.3.2.0.1 -- and a new maintainer

In a recent email to the Rcpp and lme4 mailing lists, Doug Bates announced that he was turning away from R, Rcpp, lme4 and hence also RcppEigen, for which he had been both the primary author and maintainer.

This is a huge loss for the R community. I have known Doug since the 1990s. He has been a fairly central figure around R during all those years in which I got more and more involved with R. I have learned a lot from him, and enjoyed the work together---initially on the Debian R package (which I took over from him), and all the way to joint work on Rcpp and RcppEigen, including our JSS paper. I am certain to miss him around R.

Now, in order to keep RcppEigen viable within CRAN and the R ecosystem, I have offered to maintain it. A first new upload is now on CRAN (and I also uploaded it to Debian, where I started to maintain it too as a dependency for lme4). I have also started to make a few minor changes such as tightening Suggests: a little, and editing a few descriptive files. Details are in the GitHub repo.

Questions, comments etc about RcppEigen should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

# December 19, 2013

## Joachim Breitner

### My contribution to XKCD’s #949 is not needed

A few hours ago I posted about share-file, a small tool to make a local file available at a public URL so that it is transferred from my machine, without being stored at some central server. I wanted to create something like FileTea, but usable from the command line.

My post made it to Hacker News (which brought my server to its knees for a while; let’s say I had the opportunity to find out what setting for Apache’s MaxClients option works on my machine...), where the comments (besides some misunderstanding about the notable features of share-file) contained interesting links related to solving #949:

• First of all, James Brechin implemented a command line client for FileTea. It is not clear to me whether he created it in response to my posting, or whether it had been lying around for a while, but I don’t care: it does precisely what I need, making share-file obsolete for me. Lazyweb works!
• There is a service very similar to FileTea, but seemingly developed independently, at https://fipelines.org. It seems that this task is best solved using unorthodox programming language choices; while FileTea uses C and glib (not common in web applications), fipes, the software behind fipelines, is implemented in Erlang.
• The new and shiny thing for peer-to-peer communication without additional software is WebRTC, which only needs a modern browser, and there is a chat- and file-transfer program at https://rtccopy.com/. I did not test how well it handles the case where both sides are behind a NAT, though, and sending a ready-to-use link is still simpler for the other side.
• I found another WebRTC-based tool that focuses on sharing files only, and where users who have downloaded a file will, as long as their browser tab is open, help upload it – a bit like BitTorrent. So if you need to share a file with many people (all of whom are using up-to-date non-IE browsers), sharefest is worth a try.

19 December, 2013 11:05PM by nomeata (mail@joachim-breitner.de)

## Matthew Palmer

### I am officially smarter than the Internet

Yes, the title is just a scootch self-aggrandising, but I’m rather chuffed with myself at the moment, so please forgive me.

It all started with my phone (a regular Samsung Galaxy S3) suddenly refusing to boot, stuck at the initial splash screen (“Samsung Galaxy SIII GT-I9300”). After turning it off and on again a few times (I know my basic problem-solving strategies) and clearing the cache, I decided to start looking deeper. In contrast to pretty much every other Android debugging experience ever, I almost immediately found a useful error message in the recovery system:

E:Failed to mount /efs (Invalid Argument)

“Excellent!”, thought I. “An error message. Google will tell me how to fix this!”

Nope. The combined wisdom of the Internet, distilled from a great many poorly-spelled forum posts, unhelpful blog posts, and thoroughly pointless articles, was simple: “You’re screwed. Send it back for service.”

I tried that. Suffice it to say that I will never, ever buy anything from Kogan ever again. I have learnt my lesson. Trying to deal with their support people was an exercise in frustration, and ultimately fruitless.

In the end, I decided I’d have some fun trying to fix it myself – after all, it’s a failure at the base Linux level. I know a thing or two about troubleshooting Linux, if I do say so myself. If I really couldn’t fix it, I’d just go buy a new phone.

It turned out to be relatively simple. Here’s the condensed version of my notes, in case someone wants to follow in my footsteps. If you’d like expansion, feel free to e-mail me. Note that these instructions are specifically for my Galaxy S3 (GT-I9300), but should work with some degree of adaptation on pretty much any Android phone, as far as I can determine, within the limits of the phone’s willingness to flash a custom recovery.

1. Using heimdall, flash the TeamWin recovery onto your phone (drop into “download mode” first – hold VolDown+Home+Power):

heimdall flash --recovery twrp.img

2. Boot into recovery (VolUp+Home+Power), select “Advanced -> Terminal”, and take an image of the EFS partition onto the external SD card you should already have in the phone:

dd if=/dev/block/mmcblk0p3 of=/external_sd/efs.img

3. Shutdown the phone, mount the SD card on your computer, then turn your EFS partition image into a loopback device and fsck it:

sudo losetup -f .../efs.img
sudo fsck -f /dev/loop0

With a bit of luck, the partition won’t be a complete write-off and you’ll be able to salvage the contents of the files, if not the exact filesystem structure.

Incidentally, if the filesystem was completely stuffed, you could get someone else’s EFS partition and change the IMEI and MAC addresses and you’d probably be golden, but that would quite possibly be illegal or something, so don’t do that.

4. Now comes the fun part – putting the filesystem back together. After fscking, mount the image somewhere on your computer:

mount /dev/loop0 /mnt

In my case, I had about a dozen files living in lost+found, and I figured that wasn’t a positive outcome. I did try, just in case, writing the fsck’d filesystem image back to the phone, in the hope that it just needed to mount the filesystem to boot, but no dice.

Instead, I had to find out where these lost soul^Wfiles were supposed to live. Luckily, a colleague of mine also has an S3 (the ever-so-slightly-different GT-I9300T), and he was kind enough to let me take a copy of his EFS partition, and use that as a file location template. Using a combination of file sizes, permissions/ownerships, and inode numbers (I knew the -i option to ls would come in handy someday!), I was able to put all the lost files back where they should be.
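The matching step above can be sketched roughly like this (a hypothetical helper, matching by file size only; the real job also compared permissions, ownership and inode numbers, and every path here is an example, not from an actual EFS partition):

```python
import os

def match_lost_files(lost_dir, template_dir):
    """Guess where each orphaned file in lost_dir belongs by finding
    template-relative paths whose files have the same size."""
    # Index the healthy template tree by file size.
    by_size = {}
    for root, _dirs, files in os.walk(template_dir):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            by_size.setdefault(size, []).append(os.path.relpath(path, template_dir))
    # Look up every orphaned file by its size.
    matches = {}
    for name in os.listdir(lost_dir):
        path = os.path.join(lost_dir, name)
        if os.path.isfile(path):
            matches[name] = by_size.get(os.path.getsize(path), [])
    return matches
```

Size collisions yield several candidates, which is where the extra hints (ownership, inode numbers) come in.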

5. Unmount all those EFS filesystems, losetup -d /dev/loop0, and put the fixed up EFS partition image back onto your SD card for the return trip to the phone.

6. Now, with a filesystem image that looks reasonable, it’s time to write it back onto the phone and see what happens. Copy it onto the SD card, boot up into recovery again, get a shell, and a bit more dd:

dd if=/external_sd/efs.img of=/dev/block/mmcblk0p3

7. With a bit of luck, your phone may just boot back up now. In my case, I’d done so many other things to my phone trying to get it back up and running (including flashing custom ROMs and what have you) that I needed to flash Cyanogen, boot it, and wait at the boot screen for about 15 minutes (I shit you not, 15 minutes of “Gah is my phone going to work?!?”) before it came up and lo! I had a working phone again. And about 27 SMSes. Sigh, back to work…

So, yeah, neener-neener to the collected wisdom of the ‘tubes. I fixed my EFS partition, and in the great, grand scheme of things, it wasn’t even all that difficult. For any phone which (a) allows you to flash a custom recovery and (b) you can find another of the same model to play with, EFS corruption doesn’t necessarily mean a fight with tech support.

Incidentally, if you happen to have an S3 exhibiting this problem, but you’re not comfortable fiddling with it, I’m happy to put your EFS back together again if you pay shipping both ways. It’s about a 5 minute job now I know how to do it. E-mail me.

19 December, 2013 10:30PM by Matt Palmer (mpalmer@hezmatt.org)

## Vincent Fourmond

### Rubyforge is dead, but ctioga2 goes on...

In the light of the frequent recent downtimes of rubyforge, and thanks to the information from @copiousfreetime, I finally decided to move the hosting of ctioga2 away from rubyforge, to sourceforge. Transition went smooth, git is now the only VCS. Code is hosted at sourceforge and mirrored at github.

Work goes on on ctioga2. I've recently implemented a decent support for histograms. It is already much more powerful than the one in the old ctioga, but it is far from being feature-full. Here's a preview.

I'm slowly preparing a new release for ctioga2, that would incorporate quite some fixes since the last time, and quite a few nice features in addition to the histograms. Stay tuned !

19 December, 2013 10:25PM by Vincent Fourmond (noreply@blogger.com)

## Chris Lamb

### Quickly switching between imperial and metric units on Strava

British triathletes are quite schizophrenic about their units: not "European" enough to bike using metric units yet not "American" enough to run using their imperial counterparts.

Garmin GPS units seem happy enough to accommodate this contradiction but Strava only has a single global setting.

Switching units would normally involve visiting your settings page—inconvenient when viewing lots of run and bike pages—so I wrote a Chrome extension that toggles between the different unit types via a Strava icon in the address bar:

I considered a solution that actually converted the values displayed on the page but the illusion would always be shattered by Javascript elements which would require monkey-patching to ensure the desired unit was rendered:

Source code is available, which should also serve as a template for similar extensions.

## Joachim Breitner

### My contribution to XKCD’s #949

Randall Munroe rightly put shame on all the geeks in the world when he pointed out that transferring files over the internet is still an unsolved problem.

I am a big fan of FileTea’s approach to transferring files, where they are streamed from browser to browser, without registration and without being stored on some central server, and where closing the browser tab reliably cleans up the transfer. But I wanted something that works from the command line, so I created a small tool called share-file that will use SSH port forwarding to serve the files from a local, embedded web server at a publicly available port, as shown in these screenshots:

It works without additional dependencies (but better with python-magic installed) and requires a publicly available SSH server configured with GatewayPorts clientspecified. For more details, see the README, and to try it out, simply fetch it with git clone git://git.nomeata.de/share-file.git.
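The underlying idea can be sketched in a few lines (a hedged toy, not share-file's actual code; the `ssh -R` line in the docstring is standard OpenSSH remote-forwarding syntax, and `example.org` is a placeholder for your own server):

```python
import http.server
import threading

def serve_single_file(path, port=0):
    """Serve one local file over HTTP on the loopback interface; returns
    (server, port). One would then expose the port publicly with e.g.:
        ssh -R :8000:localhost:<port> example.org
    which, as noted above, requires GatewayPorts clientspecified on the server."""
    with open(path, "rb") as f:
        payload = f.read()

    class OneFileHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Every GET returns the same file, mirroring the one-link-one-file idea.
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

        def log_message(self, *args):  # keep the demo quiet
            pass

    server = http.server.HTTPServer(("127.0.0.1", port), OneFileHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Shutting the server down (the tool's "close the tab" equivalent) reliably ends the transfer.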

BTW, if someone implements a command line client for FileTea, I’ll happily dump share-file for it.

19 December, 2013 11:17AM by nomeata (mail@joachim-breitner.de)

## AltOS 1.3 — TeleMega and EasyMini support

Bdale and I are pleased to announce the release of AltOS version 1.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a major release of AltOS as it includes support for both of our brand new flight computers, TeleMega and EasyMini.

### AltOS Firmware — New hardware, new features and fixes

Our new advanced flight computer, TeleMega, required a lot of new firmware features, including:

• 9 DoF IMU (3 axis accelerometer, 3 axis gyroscope, 3 axis compass).

• Orientation tracking using the gyroscopes (and quaternions, which are lots of fun!)

• Software FEC, both encoding and decoding.

• Four fully-programmable pyro channels, in addition to the usual apogee and main channels.

• STM32L CPU support. TeleMega needed a more powerful processor. The STM32L is a 32-bit ARM Cortex-M3 which is definitely up to the challenge.

Our new easy-to-use flight computer, EasyMini, also uses a new processor, the LPC11U14, which is an ARM Cortex-M0 part.

For our existing cc1111 devices, there are some minor bug fixes for the flight software, so you should plan on re-flashing flight units at some point. However, there aren’t any incompatible changes, so you don’t have to do it all at once.

Bug fixes:

• More USB fixes for Windows.

• Turn off the cc1111 RC oscillator at startup. This may save a bit of power, and may reduce noise inside the chip a bit.

### AltosUI — Redesigned for TeleMega and EasyMini support

AltosUI has also seen quite a bit of work for the 1.3 release, but almost all of that was a massive internal restructuring necessary to support flight computers with a wide range of sensors. From the user’s perspective, it’s pretty similar with a few changes:

• Graphs can now show the raw barometric pressure.

• Support for TeleMega and EasyMini, including alternate TeleMega pyro channel configuration.

• Bug fixes in how data were extracted from a flight record for graphing — sometimes values would end up getting plotted out of order, causing weird jaggy lines.

## Matthew Palmer <!-- document.write( "<a href=\"#\" id=\"http://www.hezmatt.org/~mpalmer/blog/2013/12/19/truly-nothing-is-safe.html_hide\" onClick=\"exclude( 'http://www.hezmatt.org/~mpalmer/blog/2013/12/19/truly-nothing-is-safe.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://www.hezmatt.org/~mpalmer/blog/2013/12/19/truly-nothing-is-safe.html_show\" style=\"display:none;\" onClick=\"show( 'http://www.hezmatt.org/~mpalmer/blog/2013/12/19/truly-nothing-is-safe.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Truly, nothing is safe

Quoted from a recent Debian Security Advisory:

Genkin, Shamir and Tromer discovered that RSA key material could be extracted by using the sound generated by the computer during the decryption of some chosen ciphertexts.

Side channel attacks are the ones that terrify me the most. You can cryptanalyse the algorithm and audit the implementation as much as you like, and then still disclose key material because your computer makes noise.

19 December, 2013 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

# December 18, 2013

## Joey Hess <!-- document.write( "<a href=\"#\" id=\"http://joeyh.name/blog/entry/charlie_bitcoin/_hide\" onClick=\"exclude( 'http://joeyh.name/blog/entry/charlie_bitcoin/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://joeyh.name/blog/entry/charlie_bitcoin/_show\" style=\"display:none;\" onClick=\"show( 'http://joeyh.name/blog/entry/charlie_bitcoin/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### what charlie's missing about bitcoin

(Posted as a comment to Charles Stross's blog post about bitcoin)

Bitcoin is a piece of software which tries to implement a particular SFnal future. One in which the world currency is de-centralized, deflationary, all early bitcoin adopters own their own planetoids, and all visitors are automatically charged for the air they breathe.

Thing is, the real world is more complicated than that. Assuming Bitcoin did manage to become an important currency, countries would naturally try to regulate it. In 30 years, by the time bitcoin mining has slowed right down, the legal system will be fully caught up to the internet.

Bitcoin tries to make its code the law (as Lessig used to say), but the law can certainly affect its code.

The law could, for example, require that bitcoin be changed to stop increasing the difficulty of mining new blocks. Then bitcoin is suddenly an inflationary currency. This would be a hard fork in the block chain, but one enforced by financial regulators. Miners would be tracked down and forced to comply. Some would perhaps go underground and run the deflationary bitcoin network on TOR hidden services. Lots of possible ways it could play out.

That's only one scenario, covering one of the many problems with Bitcoin that make Charlie hate it. So it seems to me that Bitcoin should be a gold mine for Science Fiction authors, if nothing else...

## Daniel Pocock <!-- document.write( "<a href=\"#\" id=\"http://danielpocock.com/embedding-python-multi-threaded-cpp_hide\" onClick=\"exclude( 'http://danielpocock.com/embedding-python-multi-threaded-cpp' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://danielpocock.com/embedding-python-multi-threaded-cpp_show\" style=\"display:none;\" onClick=\"show( 'http://danielpocock.com/embedding-python-multi-threaded-cpp' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Embedding Python in multi-threaded C++ applications

Embedding Python into other applications to provide a scripting mechanism is a popular practice. Ganglia can run user-supplied Python scripts for metric collection, and the Blender project does it too, allowing users to develop custom tools and use script code to direct their animations.

There are various reasons people choose Python:

• Modern, object-orientated style of programming
• Interpreted language (no need to compile things, just edit and run the code)
• Python has a vast array of modules providing many features such as database and network access, data processing, etc.

The bottom line is that the application developer who chooses to embed Python in their existing application can benefit from the product of all this existing code multiplied by the imagination of their users.

### Enter repro

repro is the SIP proxy of the reSIProcate project. reSIProcate is an advanced SIP implementation developed in C++. repro is a multi-threaded process.

repro's most serious competitor is the Kamailio SIP proxy. Kamailio has its own bespoke scripting language, inherited from the SIP Express Router (SER) family of projects. repro has always been far more rigid in its capabilities than Kamailio. On the other hand, while Kamailio has given users great flexibility, that has also come at a cost: users can easily build configurations that are not valid or that may not do what they really intend if they don't understand the intricacies of the SIP protocol. There is an example of the Kamailio configuration script in Daniel's excellent blog about building a Skype-like service in less than an hour.

Kamailio also has a wide array of plugins for things like database and LDAP access. repro only had embedded bdb and MySQL support.

Embedding Python into repro appears to be a quick way to fill many of these gaps and allow users to combine the power of the reSIProcate stack with their own custom routing logic. On the other hand, it is not simply copying the Kamailio scripting solution: rather, it provides a distinctive alternative.

### Starting the integration

Embedding Python is such a popular practice that there is even dedicated documentation on the subject. As well as looking there, I also looked over the example provided by the embedded Python module for Ganglia.

Looking over the Ganglia mod_python code I noticed a lot of boilerplate code for reference counting and other tedious activities. Given that reSIProcate is C++ code, I thought I would look for a C++ solution to this and I came across PyCXX. PyCXX is licensed under BSD-like terms similar to reSIProcate itself so it is a good fit. There is also the alternative Boost.Python API, however, reSIProcate has been built without Boost dependencies so I decided to stick with PyCXX.

I looked over the PyCXX examples and the documentation and was able to complete a first cut of the embedded Python scripting feature very quickly.

### Using PyCXX

One unusual thing I noticed about PyCXX is that the Debian package, python-cxx-dev does not provide any shared library. Instead, some uncompiled source files are provided and each project using PyCXX must compile them and link them statically itself. Here is how I do that in the Makefile.am for pyroute in repro:

AM_CXXFLAGS = -I $(top_srcdir)

reproplugin_LTLIBRARIES = libpyroute.la
libpyroute_la_SOURCES = PyRoutePlugin.cxx
libpyroute_la_SOURCES += PyRouteWorker.cxx
libpyroute_la_SOURCES += $(PYCXX_SRCDIR)/cxxextensions.c
libpyroute_la_SOURCES += $(PYCXX_SRCDIR)/cxx_extensions.cxx
libpyroute_la_SOURCES += $(PYCXX_SRCDIR)/cxxsupport.cxx
libpyroute_la_SOURCES += $(PYCXX_SRCDIR)/../IndirectPythonInterface.cxx
libpyroute_la_CPPFLAGS = $(DEPS_PYTHON_CFLAGS)
libpyroute_la_LDFLAGS = -module -avoid-version
libpyroute_la_LDFLAGS += $(DEPS_PYTHON_LIBS)
EXTRA_DIST = example.py
noinst_HEADERS = PyRouteWorker.hxx
noinst_HEADERS += PyThreadSupport.hxx

The value PYCXX_SRCDIR must be provided on the configure command line. On Debian, it is /usr/share/python2.7/CXX/Python2.

### Going multi-threaded

My initial implementation simply invoked the Python method from the main routing thread of the repro SIP proxy. This meant that it would only be suitable for executing functions that complete quickly, ruling out the use of any Python scripts that talk to network servers or other slow activities. When the proxy becomes heavily loaded, it is important that it can complete many tasks asynchronously, such as forwarding chat messages between users in real-time. Therefore, it was essential to extend the solution to run the Python scripts in a pool of worker threads.

At this point, I had an initial feeling that there may be danger in just calling the Python methods from some other random threads started by my own code. I went to see the manual and I came across some specific documentation about the subject. It looks quite easy: just wrap the call to the user-supplied Python code in something like this:

PyGILState_STATE gstate;
gstate = PyGILState_Ensure();

/* Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */

/* Release the thread. No Python API allowed beyond this point. */
PyGILState_Release(gstate);

Unfortunately, I found that this would not work: one of two problems occurs when using this code:

• The thread blocks on the call to PyGILState_Ensure()

• The program crashes with a segmentation fault when the call to a Python method is invoked

Exactly which of these outcomes I experienced seemed to depend on whether I tried to explicitly call PyEval_ReleaseThread() from the main thread after doing the Py_Initialize() and other setup tasks.
I tried various permutations of PyGILState_Ensure()/PyGILState_Release() and/or PyEval_SaveThread()/PyEval_ReleaseThread(), but I always hit one of the same problems.

The next thing that occurred to me is that maybe PyCXX provides some framework for thread integration. I had a look through the code and couldn't find any reference to the threading functionality from the C API.

I went looking for more articles and mailing list discussions and found implementation notes such as this one in Linux Journal and this wiki from the Blender developers. Most of them just appeared to be repeating what was in the manual, with a few subtle differences, but none of this provided an immediate solution.

Eventually, I discovered another blog about concurrency with embedded Python, and it suggests something not highlighted in any of the other resources: calling PyThreadState_New(m_interpreterState) in each thread after it starts and before it does anything else. Combining this with the use of PyEval_SaveThread()/PyEval_ReleaseThread() fixed the problem: the use of PyThreadState_New() was not otherwise mentioned in the relevant section of the Python guide.
I decided to take this solution a step further and create a convenient C++ class to encapsulate the logic. You can see this in PyThreadSupport.hxx:

class PyExternalUser
{
public:
   PyExternalUser(PyInterpreterState* interpreterState)
      : mInterpreterState(interpreterState),
        mThreadState(PyThreadState_New(mInterpreterState)) {};

   class Use
   {
   public:
      Use(PyExternalUser& user)
         : mUser(user)
      { PyEval_RestoreThread(mUser.getThreadState()); };
      ~Use()
      { mUser.setThreadState(PyEval_SaveThread()); };
   private:
      PyExternalUser& mUser;
   };
   friend class Use;

protected:
   PyThreadState* getThreadState() { return mThreadState; };
   void setThreadState(PyThreadState* threadState) { mThreadState = threadState; };

private:
   PyInterpreterState* mInterpreterState;
   PyThreadState* mThreadState;
};

The way to use it is demonstrated in the PyRouteWorker class. Observe how PyExternalUser::Use is instantiated in the PyRouteWorker::process() method: when it goes out of scope (whether due to a normal return, an error or an exception) the necessary call to PyEval_SaveThread() is made in the PyExternalUser::Use::~Use() destructor.

### Using other Python modules and DSO problems

All of the above worked for basic Python such as this trivial example script:

def on_load():
    '''Do initialisation when module loads'''
    print 'example: on_load invoked'

def provide_route(method, request_uri, headers):
    '''Process a request URI and return the target URI(s)'''
    print 'example: method = ' + method
    print 'example: request_uri = ' + request_uri
    print 'example: From = ' + headers["From"]
    print 'example: To = ' + headers["To"]
    routes = list()
    routes.append('sip:bob@example.org')
    routes.append('sip:alice@example.org')
    return routes

However, it needs a more credible and useful test: querying an LDAP server with the python-ldap module appears to be a good choice.
Upon trying to use import ldap in the Python script, repro would refuse to load the script, choking on an error like this:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ldap/__init__.py", line 22, in <module>
    import _ldap
ImportError: /usr/lib/python2.7/dist-packages/_ldap.so: undefined symbol: PyExc_SystemError

I looked at the file _ldap.so and discovered that it is linked with the LDAP libraries but not explicitly linked to any version of the Python runtime libraries. It expects the application hosting it to provide the Python symbols globally.

In my own implementation, the embedded Python encapsulation code is provided as a DSO plugin, similar to the way plugins are loaded in Ganglia or Apache. The DSO links to Python and is loaded by a dlopen() call from the main process. The main repro binary has no direct link to the Python libraries.

Adding RTLD_GLOBAL to the top-level dlopen() call for loading the plugin is one way to ensure the Python symbols are made available to the Python modules loaded indirectly by the Python interpreter. This solution may be suitable for applications that don't mix and match many different components.

### Doing something useful with it

Now it was all working nicely, I took a boilerplate LDAP Python example and used it to make a trivial script that converts sip:user@example.org to something like sip:9001@pbx.example.org, assuming that 9001 is the telephoneNumber associated with the user@ email address in LDAP.
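As an aside, the effect of RTLD_GLOBAL can be observed from Python itself via ctypes, whose mode parameter is passed straight through to dlopen(). This is only an illustrative sketch of the flag, not part of the repro code, and it assumes a glibc-based Linux system where libm.so.6 is available:

```python
import ctypes

# Load libm with RTLD_GLOBAL: its symbols become visible to any shared
# objects dlopen()ed later in this process, just as repro must expose the
# Python runtime's symbols (e.g. PyExc_SystemError) to _ldap.so.
libm = ctypes.CDLL("libm.so.6", mode=ctypes.RTLD_GLOBAL)

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
```

The default mode in most dlopen() wrappers is RTLD_LOCAL, which is exactly what makes the Python runtime's symbols invisible to _ldap.so in the failure described above.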
It is surprisingly simple and easily adaptable to local requirements depending upon the local LDAP structures:

import ldap
from urlparse import urlparse

def on_load():
    '''Do initialisation when module loads'''
    #print 'ldap router: on_load invoked'

def provide_route(method, request_uri, headers):
    '''Process a request URI and return the target URI(s)'''
    #print 'ldap router: request_uri = ' + request_uri
    _request_uri = urlparse(request_uri)
    routes = list()

    # Basic LDAP server parameters:
    server_uri = 'ldaps://ldap.example.org'
    base_dn = "dc=example,dc=org"
    # this domain will be appended to the phone numbers when creating
    # the target URI:
    phone_domain = 'pbx.example.org'

    # urlparse is not great for "sip:" URIs,
    # the user@host portion is in the 'path' element:
    filter = "(&(objectClass=inetOrgPerson)(mail=%s))" % _request_uri.path
    #print "Using filter: %s" % filter

    try:
        con = ldap.initialize(server_uri)
        scope = ldap.SCOPE_SUBTREE
        retrieve_attributes = None
        result_id = con.search(base_dn, scope, filter, retrieve_attributes)
        result_set = []
        while 1:
            timeout = 1
            result_type, result_data = con.result(result_id, 0, None)
            if (result_data == []):
                break
            else:
                if result_type == ldap.RES_SEARCH_ENTRY:
                    result_set.append(result_data)
        if len(result_set) == 0:
            #print "No Results."
            return routes
        for i in range(len(result_set)):
            for entry in result_set[i]:
                if entry[1].has_key('telephoneNumber'):
                    phone = entry[1]['telephoneNumber'][0]
                    routes.append('sip:' + phone + '@' + phone_domain)
    except ldap.LDAPError, error_message:
        print "Couldn't Connect. %s " % error_message

    return routes

### Embedded Python opens up a world of possibilities

After Ganglia 3.1.0 introduced an embedded Python scripting facility, dozens of new modules started appearing in github.
Python scripting lowers the barrier for new contributors to a project and makes it much easier to fine-tune free software projects to meet local requirements: hopefully we will see similar trends with the repro SIP proxy and other projects that choose Python.

The code is committed here in the reSIProcate repository. These features will appear in the next beta release of reSIProcate, and Debian packages will be available in unstable in a few days.

18 December, 2013 08:12PM by Daniel.Pocock

## C.J. Adams-Collier <!-- document.write( "<a href=\"#\" id=\"http://wp.colliertech.org/cj/?p=1247_hide\" onClick=\"exclude( 'http://wp.colliertech.org/cj/?p=1247' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://wp.colliertech.org/cj/?p=1247_show\" style=\"display:none;\" onClick=\"show( 'http://wp.colliertech.org/cj/?p=1247' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Maxine is now running on mariadb

So back when I was working for MySQL AB as support manager for MaxDB, I created an IRC bot to help manage the #maxdb channel on Freenode. We didn’t get a lot of traffic, and Daniel De Graaf mentioned that he could use a bot to help manage some iptables factoids over on #netfilter. So I had her join. He taught her all sorts of interesting things. She stored these factoids in a MySQL database.

I have just migrated from MySQL to MariaDB, which I compiled from source. Here are the packages: http://www.colliertech.org/~cjac/debian/

cjac@mariadb:~$ mysql -u root -p
Welcome to the MariaDB monitor.  Commands end with ; or \g.

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
15:22 < cj> maxine: iptables?
15:22 < maxine> hmmm... iptables is a generic table structure for the
definition of rulesets. Each rule within a chain consists of a
number of classifiers (iptables matches) and one optional
connected action (iptables target).

18 December, 2013 03:45PM by C.J. Adams-Collier

## Ingo Juergensmann <!-- document.write( "<a href=\"#\" id=\"http://blog.windfluechter.net/content/blog/2013/12/18/1677-debian-donation-m68k-arrived_hide\" onClick=\"exclude( 'http://blog.windfluechter.net/content/blog/2013/12/18/1677-debian-donation-m68k-arrived' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.windfluechter.net/content/blog/2013/12/18/1677-debian-donation-m68k-arrived_show\" style=\"display:none;\" onClick=\"show( 'http://blog.windfluechter.net/content/blog/2013/12/18/1677-debian-donation-m68k-arrived' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Debian donation to m68k arrived

The Debian m68k port has been entitled by the DPL to receive a donation of five memory expansion cards for the m68k autobuilders. The cards arrived two weeks ago and are now being shipped to the appropriate buildd admins. Adding those 256 MB memory expansions will have a huge effect on the m68k buildds, because most of them are Amigas that are currently running with "just" 128 MB.

The problem with those expansion cards is making use of them. Sounds strange, but here is the story behind it....

Those memory expansion cards, namely the BigRamPlus from Individual Computers, are Zorro III bus cards, which have some speed limitations. The Amiga memory model is best described in the Amiga Hardware Reference Manual. For Zorro III based Amigas this is described in the section "A3000 memory map", where you can see that the memory model is divided into different address spaces. The most important address space is the "Coprocessor Slot Expansion" space, starting at $0800 0000. This is where the memory on the CPU accelerator cards will be found, and it runs at full CPU speed. The BigRamPlus, however, is located within the "Zorro III Expansion" address space at $1000 0000 and will have transfer rates of about 13 MB/s. Then again, there's still the motherboard expansion memory and others like Zorro II expansion memory.

Unfortunately the current kernel does not support SPARSEMEM on m68k, but is using DISCONTIGMEM, as Geert Uytterhoeven explained. In short: we need SPARSEMEM support to easily make use of all the available memory chunks that can be found. To make it a little more difficult, Amigas use some kind of memory priority. Memory on accelerator cards usually has a priority of 40, motherboard expansion memory has a priority of, let's say, 20, and chip memory a priority of 0. This priority is usually equivalent to the speed of the memory. So, we want to have the kernel loaded into accelerator memory, of course.

Basically we could do that by using a memfile and define the different memory chunks in the appropriate priority list like this one:

2097152
0x08000000 67108864
0x07400000 12582912
0x05000000 268435424

That would be an easy solution, right? Except that it doesn't work out. Currently the kernel will be loaded into the first memory chunk that is defined, and all memory chunks below that address will be ignored. As you can see, 0x07400000 and 0x05000000 would be ignored because of this. Getting confused? No problem! It will get worse! ;)

There's another method of accessing memory on Amigas: it's called z2ram and uses Zorro II memory as, let's say, a swapping area. But maybe you guessed it: z2ram does not work for Zorro III memory (yet). So, this won't work either.

Geert suggested using that Zorro III memory as an mtd device, and finally this worked out! You'll need these modules in your kernel:

CONFIG_MTD=m
CONFIG_MTD_CMDLINE_PARTS=m
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_SWAP=m
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
CONFIG_MTD_SLRAM=m
CONFIG_MTD_PHRAM=m

Then you just need to create the mtd device and configure it as swap space:

/sbin/modprobe phram phram=bigram0,0x50000000,0xfffffe0
/sbin/modprobe mtdblock
/sbin/mkswap /dev/mtdblock0
/sbin/swapon -p 5 /dev/mtdblock0

And then you're done:

# swapon -s
Filename Type Size Used Priority
/dev/sda3 partition 205932 8 1
/dev/sdb3 partition 875536 16 1
/dev/mtdblock0 partition 262136 53952 5

To make it even worse (yes, there's still room for that! ;)) you can put two memory expansion cards into one box:

# lszorro -v
00: MacroSystems USA Warp Engine 40xx [Accelerator, SCSI Host Adapter and RAM Expansion]
40000000 (512K)

01: Unknown device 0e3b:20:00
50000000 (256M)

02: Village Tronic Picasso II/II+ RAM [Graphics Card]
00200000 (2M)

03: Village Tronic Picasso II/II+ [Graphics Card]
00e90000 (64K)

04: Hydra Systems Amiganet [Ethernet Card]
00ea0000 (64K)

05: Unknown device 0e3b:20:00
60000000 (256M)

The two "Unknown device" entries are the two BigRamPlus cards. As you can see, card #1 starts at 0x50000000 and card #2 starts at 0x60000000. Unfortunately the phram kernel module cannot be loaded twice with different start addresses, and the idea of starting at 0x50000000 with a size of 512M won't work either, as there seems to be a reserved 0x20 byte range at the beginning of each card. Anyway...

So, to make a very long and weird story short: the donated memory cards from Debian can now be used as additional, fast swap space for the buildds for as long as it takes to get SPARSEMEM support working.

Thanks again for donating the money for those memory expansion cards for the good old m68k port. Once SPARSEMEM support on m68k is done, it will benefit not only these cards in Amigas but Ataris as well.


## Antoine Beaupré <!-- document.write( "<a href=\"#\" id=\"http://anarcat.koumbit.org/2013-12-17-password-reset-speedstream-5200-modems_hide\" onClick=\"exclude( 'http://anarcat.koumbit.org/2013-12-17-password-reset-speedstream-5200-modems' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://anarcat.koumbit.org/2013-12-17-password-reset-speedstream-5200-modems_show\" style=\"display:none;\" onClick=\"show( 'http://anarcat.koumbit.org/2013-12-17-password-reset-speedstream-5200-modems' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Password reset of Speedstream 5200 modems

After an extended downtime on my ADSL uplink at home during a nice snowstorm, I got curious and wanted to find out the SNR (Signal to Noise Ratio) I had on my line, to try to explain why it was down.

It turned out it was more complicated than I thought, because the modem was locked down by Bell Canada (even though I am not a customer). There was a Windows utility, but since I haven't been running this pathetic operating system in years, I had to find an alternative.

Fortunately, the author of that utility posted a description of the packet format used for the password reset, so there I was, trying to send raw ethernet frames using Linux (for testing and, why not!) and FreeBSD (because I use FreeBSD for routing, because I dislike iptables).

The result is this somewhat poorly written C program, available in this git repository. It uses raw sockets on Linux but BPF (the Berkeley Packet Filter) on FreeBSD, as the latter's interface for raw sockets is a little broken.
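The frame-building half of such a tool is straightforward; the Linux side can be sketched in Python as follows. This is a hedged illustration only: the actual Speedstream reset payload format and the author's C program are not reproduced here, and the interface name and EtherType are placeholders (0x88B5 is the IEEE 802 EtherType reserved for local experiments):

```python
import struct

def build_frame(dst_mac, src_mac, ethertype, payload):
    # Ethernet II header: 6-byte destination MAC, 6-byte source MAC,
    # 2-byte EtherType in network byte order, followed by the payload.
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype) + payload

frame = build_frame(b"\xff" * 6,                 # broadcast destination
                    b"\x00" * 6,                 # placeholder source MAC
                    0x88B5,                      # local-experimental EtherType
                    b"reset-payload-goes-here")  # placeholder payload

# Sending requires root and, on Linux, an AF_PACKET socket:
# import socket
# s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
# s.bind(("eth0", 0))
# s.send(frame)
```

On FreeBSD the same frame would instead be written to a /dev/bpf device, which is why a portable tool needs two code paths.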

Enjoy.

18 December, 2013 04:19AM by anarcat

## Daniel Kahn Gillmor <!-- document.write( "<a href=\"#\" id=\"http://debian-administration.org/users/dkg/weblog/106_hide\" onClick=\"exclude( 'http://debian-administration.org/users/dkg/weblog/106' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://debian-administration.org/users/dkg/weblog/106_show\" style=\"display:none;\" onClick=\"show( 'http://debian-administration.org/users/dkg/weblog/106' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### automatically have uscan check signatures

If you maintain software in debian, one of your regular maintenance tasks is checking for new upstream versions, reviewing them, and preparing them for debian if appropriate. One of those steps is often to verify the cryptographic signature on the upstream source archive.

At the moment, most maintainers do the cryptographic check manually, or maybe even don't bother to do it at all. For the common case of detached OpenPGP signatures, though, uscan can now do it for you automatically (as of devscripts version 2.13.3). You just need to tell uscan what keys you expect upstream to be signing with, and how to find the detached signature.

So, for example, Damien Miller recently announced his new key that he will be using to sign OpenSSH releases (his new key has OpenPGP fingerprint 59C2 118E D206 D927 E667 EBE3 D3E5 F56B 6D92 0D30 -- you can verify it has been cross-signed by his older key, and his older key has been revoked with the indication that it was superseded by this one). Having done a reasonable verification of Damien's key, if i was the openssh package maintainer, i'd do the following:

cd ~/src/openssh/
gpg --export '59C2 118E D206 D927 E667  EBE3 D3E5 F56B 6D92 0D30' >> debian/upstream-signing-key.pgp
And then, upon noticing that the signature files are named with a simple .asc suffix on the upstream distribution site, we can use the following pgpsigurlmangle option in debian/watch:

version=3
opts=pgpsigurlmangle=s/$/.asc/ ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-(.*)\.tar\.gz

I've filed this specific example as debian bug #732441. If you notice a package with upstream signatures that aren't currently being checked by uscan (or if you are upstream, you sign your packages, and you want your debian maintainer to verify them), you can file similar bugs. Or, if you maintain a package for debian, you can just fix up your package so that this check is there on the next upload.

If you maintain a package whose upstream doesn't sign their releases, ask them why not -- wouldn't upstream prefer that their downstream users can verify that each release wasn't tampered with?

Of course, none of these checks take the place of the real work of a debian package maintainer: reviewing the code and the changelogs, thinking about what changes have happened, and how they fit into the broader distribution. But it helps to automate one of the basic safeguards we should all be using. Let's eliminate the possibility that the file was tampered with at the upstream distribution mirror or while in transit over the network. That way, the maintainer's time and energy can be spent where they're more needed.
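The s/$/.asc/ expression in pgpsigurlmangle is an ordinary sed-style substitution that uscan applies to each upstream tarball URL to derive the signature URL. Its effect can be sanity-checked in Python (an illustration only; the version number in the URL is made up):

```python
import re

url = "ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-6.4p1.tar.gz"

# s/$/.asc/ anchors at the end of the string and appends ".asc"
sig_url = re.sub(r"$", ".asc", url, count=1)
```

So for every tarball uscan downloads, it will also fetch the matching .asc file and verify it against the keyring in debian/upstream-signing-key.pgp.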
18 December, 2013 03:15AM by Daniel Kahn Gillmor (dkg)

## Russ Allbery <!-- document.write( "<a href=\"#\" id=\"http://www.eyrie.org/~eagle/journal/2013-12/004.html_hide\" onClick=\"exclude( 'http://www.eyrie.org/~eagle/journal/2013-12/004.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://www.eyrie.org/~eagle/journal/2013-12/004.html_show\" style=\"display:none;\" onClick=\"show( 'http://www.eyrie.org/~eagle/journal/2013-12/004.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### lbcd 3.4.2

lbcd is the daemon run on individual systems that participate in DNS-based load-balanced pools using lbnamed.

This is a portability release that (finally) switches the default API for user login information over to getutxent from getutent (required for Mac OS X) and enables building on FreeBSD and Debian GNU/kFreeBSD systems. Note that lbcd will only work on FreeBSD systems if the Linux-compatible /proc file system is mounted, but this appears to be a common configuration.

You can get the latest release from the lbcd distribution page.

## Steinar H. Gunderson <!-- document.write( "<a href=\"#\" id=\"http://blog.sesse.net/blog/tech/2013-12-18-01-24_wininet_ii.html_hide\" onClick=\"exclude( 'http://blog.sesse.net/blog/tech/2013-12-18-01-24_wininet_ii.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.sesse.net/blog/tech/2013-12-18-01-24_wininet_ii.html_show\" style=\"display:none;\" onClick=\"show( 'http://blog.sesse.net/blog/tech/2013-12-18-01-24_wininet_ii.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Wininet II

A follow-up from yesterday: Developing for Windows in WINE is consistently less surprising than developing for Windows on Windows. Maybe UNIX people just think more like I do.

Example: In a small hobby project of mine, I use WinInet (the FTP/HTTP/Gopher layer also used by IE, as far as I understand) in asynchronous mode, where the “please download this file” function doesn't block, but instead you receive callbacks. Eventually, after a lot of them, you get INTERNET_STATUS_REQUEST_COMPLETE, which means “this file is now done, lucky you”. (At that point, you have to go through a lot of semi-arcane error handling, but that's a different story.)

So, you want the contents of the thing you just downloaded? Well, now you need to call InternetReadFile(), which is sort of like fread(), except it might sometimes not be able to chunk up the response for you and can return ”I need a bigger buffer plz” (granted, these are in obscure cases). But hey, you're in async mode, right? That means that Windows is seemingly within its rights to not be done even though it just called you with the complete callback!
So InternetReadFile() happily returns ERROR_IO_PENDING, and microseconds later, as the data actually gets read, the callback is called anew—from another thread! Of course with INTERNET_STATUS_REQUEST_COMPLETE again. “Hey, dude, yeah, I'm really complete this time. Honest.” Of course, this doesn't happen if you sleep a bit before InternetReadFile(), which makes this fun to debug. And of course, none of this happens in WINE. You just get your file, end of story.

And no, there's no way to do a request that's asynchronous on HTTP but synchronous on InternetReadFile(). You're sync or async. Choose one.

# December 17, 2013

## Jonathan McCrohan

### Debian Ireland User Group Meetup: Festive Drinks

The Debian Ireland User Group will be meeting for festive drinks this Thursday, 19th Dec 2013, at 20:00 in The Long Stone, Townsend Street, Dublin 2. All welcome. For more information, please see the mailing list or contact us via IRC.

17 December, 2013 11:33PM by jmccrohan

## C.J. Adams-Collier

### Gotta’ get ‘em all.

And now I have them all. Maybe I can reduce the load on my WAN pipe by setting up a mirror for the island.

17 December, 2013 07:46PM by C.J. Adams-Collier

## Richard Hartmann

### Chilling effects

When a porn joke becomes a political statement. And when you think for several minutes about whether you want to write a blog post with this title, as it's quite obviously a trigger word.
17 December, 2013 11:50AM by Richard 'RichiH' Hartmann

## Joey Hess

### completely linux distribution-independent packaging

Sometimes it makes sense to ship a program to linux users in ready-to-run form that will work no matter what distribution they are using. This is hard. Often a commercial linux game will bundle up a few of the more problematic libraries, and ship a dynamic executable that still depends on other system libraries. These days they're building and shipping entire Debian derivatives instead, to avoid needing to deal with that.

There have been a few efforts to provide so-called one-click install package systems that, AFAIK, have not been widely used. I don't know if they generally solved the problem. More modern approaches seem to be things like docker, which move the application bundle into a containerized environment. I have not looked at these, but so far they do not seem to have spread widely enough to be a practical choice if you're wanting to provide something that will work for a majority of linux users.

So, I'm surprised that I seem to have managed to solve this problem using nothing more than some ugly shell scripts.
My standalone tarballs of git-annex now seem fairly good at running on a very wide variety of systems. For example, I unpacked the tarball into the Debian-Installer initramfs and git-annex could run there. I can delete all of /usr and it keeps working! All it needs is a basic sh, which even busybox provides.

Looks likely that the new armel standalone tarball of git-annex will soon be working on embedded systems as odd as the Synology NAS, and it's already been verified to work on Raspbian. (I'm curious if it would work on Android, but that might be a stretch.)

Currently these tarballs are built for a specific architecture, but there's no particular reason a single one couldn't combine binaries built for each supported architecture.

## technical details

The main trick is to ship a copy of ld-linux.so, as well as all the glibc libraries and associated files, and of course every other library and file the application needs. Shipping ld-linux.so lets a shell script wrapper be made around each binary that runs ld-linux.so and passes it the library directories to search. This way the binary can be run, bypassing the system's own dynamic linker (which might not like it) and using the included glibc. For example, a shell script that runs the git binary from the bundle:

    exec "$GIT_ANNEX_LINKER" --library-path "$GIT_ANNEX_LD_LIBRARY_PATH" "$GIT_ANNEX_SHIMMED/git/git" "$@"

I have to set quite a lot of environment variables to avoid using any files from the system and instead use ones from my tarball. One important one is GCONV_PATH. Note that LD_LIBRARY_PATH does not have to be set, and this is nice because it allows running a few programs from the host system, such as its web browser.

## worse is better

Of course I'll take a proper distribution package over this any time. Still, it seems to work quite well, in all the horrible cases that require it.

## Steinar H. Gunderson

### Wininet

From wininet.h:

    #define ERROR_INTERNET_INSERT_CDROM (INTERNET_ERROR_BASE + 53)

I suppose that's... no, wait. It doesn't really make any sense at all.

# December 16, 2013

## Russ Allbery

### krb5-strength 2.2

krb5-strength provides a plugin and external password checking program implementation of password strength checking for Kerberos KDCs. Management at Stanford has decided that we want to impose different character class restrictions by length, with longer passwords having fewer required character classes.
This release therefore adds more comprehensive character class requirement support, including the ability to set varying requirements based on the length of the password. This release also improves the cdbmake-wordlist utility, adding support for removing words longer than a maximum length and filtering out words that match a user-provided regular expression. It can also be run in filter mode to generate a new wordlist instead of a CDB file. Finally, a file descriptor and memory leak in the embedded version of CrackLib has been fixed. (This was already fixed in the regular CrackLib release.)

You can get the latest version from the krb5-strength distribution page.

## Steinar H. Gunderson

### Movit talk

I've sent in my (belated) application to FOSDEM 2014 about Movit, my video filter library. So with a bit of luck, you might find me in Belgium at the time. :-)

## Erich Schubert

### Java Hotspot Compiler - a heavily underappreciated technology

When I had my first contacts with Java, probably around Java 1.1 or Java 1.2, it felt all clumsy and slow. And this is still the reputation that Java has among many developers: bloated source code and slow performance.

In the last years I've worked a lot with Java; it would not have been my personal first choice, but as this is usually the language the students know best, it was the best choice for this project, the data mining framework ELKI. I've learned a lot about Java since, including debugging and optimizing Java code. ELKI contains a number of tasks that require good number-crunching performance; something where Java particularly had the reputation of being slow. I must say, this is not entirely fair. Sure, the pure matrix multiplication performance of Java is not up to Fortran (BLAS libraries are usually implemented in Fortran, and many tools such as R or NumPy will use them for the heavy lifting). But there are other tasks than matrix multiplication, too!
There are a number of things where Java could be improved a lot. Some of this will be coming with Java 8; other parts are still missing. I'd particularly like to see native BLAS support and multi-valued on-stack returns (to allow an intrinsic sincos, for example). In this post, I want to emphasize that the Hotspot compiler usually does an excellent job.

A few years ago, I always laughed at those who claimed "Java code can even be faster than C code", because the Java JVM is written in C. Having had a deeper look at what the Hotspot compiler does, I'm now saying: I'm not surprised that quite often, reasonably good Java code outperforms reasonably good C code. In fact, I'd love to see a "hotspot" optimizer for C.

So what is it that makes Hotspot so fast? In my opinion, the key ingredient to Hotspot performance is aggressive inlining. And this is exactly why "reasonably well written" Java code can be faster than C code written at a similar level. Let me explain this with an example. Assume we want to compute a pairwise distance matrix, but we want the code to be able to support arbitrary distance functions. The code will roughly look like this (not heavily optimized):

    for (int i = 0; i < size; i++) {
        for (int j = i + 1; j < size; j++) {
            matrix[i][j] = computeDistance(data[i], data[j]);
        }
    }

In C, if you want to be able to choose computeDistance at runtime, you would likely make it a function pointer, or in C++ use e.g. boost::function or a virtual method. In Java, you would use an interface method instead, i.e. distanceFunction.distance(). In C, your compiler will most likely emit a jmp *%eax instruction to jump to the method that computes the distance; with virtual methods in C++, it would load the target method from the vtable and then jmp there. Technically, it will likely be a "register-indirect absolute jump". Java will, however, try to inline this code at runtime, i.e. it will often insert the actual distance function used at the location of this call.
Why does this make a difference? CPUs have become quite good at speculative execution, prefetching and caching. Yet, as far as I can tell, it can still pay off to save those jmps, if only to allow the CPU to apply these techniques to predict another branch better. But there is also a second effect: the Hotspot compiler will be optimizing the inlined version of the code, whereas the C compiler has to optimize the two functions independently (as it cannot know they will only be used in this combination).

Hotspot can be quite aggressive there. It will even inline and optimize when it is not 100% sure that its assumptions are correct. It will just add simple tests (e.g. some type checks) and jump back to the interpreter/compiler when these assumptions fail, and then reoptimize again.

You can see the inlining effect in Java when you use the -Xcomp flag, telling the Java VM to compile everything at load time. It cannot do as much speculative inlining there, as it does not know which method will be called and which class will be seen. Instead, it will have to compile the code using virtual method invocations, just like C++ would use for executing this. Except that in Java, every single method will be virtual (in C++, you have to be explicit). You will likely see a substantial performance drop when using this flag; it is not recommended. Instead, let Hotspot perform its inlining magic. It will inline a lot by default - in particular tiny methods such as getters and setters.

I'd love to see something similar in C or C++. There are some optimizations that can only be done at runtime, not at compile time. Maybe not even at linking time, but only with runtime type information, and that may also change over time (e.g. the user first computes a distance matrix for Euclidean distance, then for Manhattan distance).

Don't get me wrong. I'm not saying Java is perfect.
There are a lot of common mistakes, such as using java.util.Collections for primitive types, which comes at a massive memory cost and garbage collection overhead. The first thing to check when optimizing Java applications is memory usage overhead. But all in all, good Java code can indeed perform well, and may even outperform C code, due to the inlining optimization I just discussed; in particular on large projects where you cannot fine-tune inlining in C anymore.

Sometimes, Hotspot may also fail, which is largely why I've been investigating these issues recently. In ELKI 0.6.0 I'm facing a severe performance regression with linear scans (which is actually the simpler codepath, not using indexes but a simple loop as seen above). I had this with 0.5.0 before, but back then I was able to revert to an earlier version that still performed well (even though the code was much more redundant). This time, I would have had to revert a larger refactoring that I wanted to keep, unfortunately. Because the regression was quite severe - from 600 seconds to 1500-2500 seconds (still clearly better than -Xcomp) - I first assumed I was facing an actual programming bug. Careful inspection, down to the assembler code produced by the Hotspot VM, did not reveal any such error. Then I tried Java 8, and the regression was gone. So apparently it is not a programming error; Java 7 simply failed at optimizing it remotely as well as it did with the previous ELKI version!

If you are a Java guru interested in tracking down this regression, feel free to contact me. It's in an open source project, ELKI. I'd be happy to have good performance for linear scans even on Java 7. But I don't want to waste any more hours on this; instead I plan to move on to Java 8 for other reasons (lambda expressions, which will greatly reduce the amount of glue code needed), too. Plus, Java 8 is faster in my benchmarks.
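To make the inlining argument above concrete, here is a minimal, self-contained Java sketch of the pattern being discussed. The names (DistanceFunction, Manhattan, PairwiseDemo) are illustrative inventions for this post, not taken from ELKI:

```java
// Illustrative sketch (names made up, not from ELKI): a pairwise
// distance matrix computed through an interface call - the kind of
// call site Hotspot can speculatively devirtualize and inline.
interface DistanceFunction {
    double distance(double[] a, double[] b);
}

class Manhattan implements DistanceFunction {
    public double distance(double[] a, double[] b) {
        double sum = 0;
        for (int d = 0; d < a.length; d++) {
            sum += Math.abs(a[d] - b[d]);
        }
        return sum;
    }
}

public class PairwiseDemo {
    // As long as Manhattan is the only implementation observed at this
    // call site, the JIT can typically inline distance() into the loop
    // and guard the assumption with a cheap type check.
    static double[][] pairwise(double[][] data, DistanceFunction f) {
        int size = data.length;
        double[][] matrix = new double[size][size];
        for (int i = 0; i < size; i++) {
            for (int j = i + 1; j < size; j++) {
                matrix[i][j] = f.distance(data[i], data[j]);
            }
        }
        return matrix;
    }

    public static void main(String[] args) {
        double[][] data = { {0, 0}, {1, 2}, {3, 1} };
        double[][] m = pairwise(data, new Manhattan());
        System.out.println(m[0][1]); // Manhattan distance of (0,0) and (1,2)
    }
}
```

On HotSpot JVMs, running this with -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining should show whether the distance() call site was inlined; loading and using a second DistanceFunction implementation is what forces the JIT to fall back toward real virtual dispatch.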
## Dirk Eddelbuettel

### RProtoBuf 0.3.2

A new version 0.3.2 of RProtoBuf is now on CRAN. RProtoBuf provides GNU R bindings for the Google Protobuf data encoding library used and released by Google. As for the last few releases, Murray took charge of most changes. The NEWS file entry follows:

#### Changes in RProtoBuf version 0.3.2 (2013-12-15)

• Fixed a bug that erroneously prevented users from setting raw byte fields in protocol buffers under certain circumstances.
• Give a user-friendly error message when setting an extension to a message of the wrong type, instead of causing a C++ check failure that terminates the R session.
• Change object table lookup slightly to allow users to use the <<- operator in code using RProtoBuf without hitting a stop() error in the lookup routine.
• Add missing enum_type method and improve show method for EnumValueDescriptors.
• Improve documentation and tests for all of the above.
• Rewrote tests/ script calling RUnit tests.

CRANberries also provides a diff to the previous release 0.3.1. More information is at the RProtoBuf page, which has a draft package vignette, a 'quick' overview vignette and a unit test summary vignette. Questions, comments etc. should go to the rprotobuf mailing list off the RProtoBuf page at R-Forge.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

## Craig Small

### Debian’s procps 3.3.9

While the upstream procps released last week has a new pidof, the Debian package will continue not to ship that binary, and the Debian sysvinit-utils package will continue to have that file. That avoids any messy procps splits and putting one part into Essential, etc. This may mean that one distribution’s pidof doesn’t quite work like another’s, but that has been the case already; which is why, when I discussed the change as upstream, I wondered where they found some of those flags I don’t have.
16 December, 2013 11:53AM by Craig

## Craig Sanders

### shopping online – whinge of the day

There are numerous benefits to shopping online – you can order stuff and pay for it from the comfort of your own home and it’ll arrive on your doorstep in just a few days. wonderful, and so many words have been written about the benefits over the years that there’s no need to belabour the point. But there are some things about it that really suck and make me NOT want to order stuff online – most of the time my distaste for the privacy and delivery problems is enough to dissuade me from buying anything, helped by the fact that I live a fairly non-consumerist lifestyle (and for a geek, I have an almost non-existent gadget-fetish). but occasionally I want or need something that isn’t available locally or which my ill-health makes impractical to buy in person (i’ve been effectively housebound or in hospital for most of the year). so here are the things that piss me off most about shopping online.

1. Loss of anonymity and privacy.
You can walk into any shop and pay cash for whatever you want; they don’t need to know who you are or where you live, and they don’t get to “accidentally” add you to a mailing list against your express wishes – “oh, we didn’t realise that when you said DO NOT SPAM ME, you actually meant DO NOT SPAM ME…we’ll just keep spamming you for another six months until we get around to processing your removal ‘request’”.

partial solution: use a different email address with each online shop. boycott the shop and destroy the address if they spam. what sucks is that this is actually necessary. it doesn’t matter how loudly I say “DO NOT SPAM ME. DO NOT ADD ME TO ANY MAILING LISTS. DO NOT ASK ME TO COMPLETE A SURVEY. DO NOT CONTACT ME FOR ANY REASON NOT DIRECTLY RELATED TO DELIVERING MY ORDER. MY PERSONAL INFORMATION IS PROVIDED SOLELY SO THAT YOU CAN SHIP MY ORDER TO ME AND MAY NOT BE USED FOR ANY OTHER PURPOSE. FAILURE TO RESPECT MY PRIVACY WILL RESULT IN IMMEDIATE AND PERMANENT BOYCOTT” (yes, that IS the text I use in the comments/instructions field of every order I place), marketing vermin will decide that I really want their spam after all because their spam is super interesting and important and isn’t really spam at all.

2. nosy demands for unnecessary information.

OK, they need your address to ship your parcel…but they don’t need your phone number, and they don’t need to tie your order to your facebook or other bullshit social-spam account. most online order forms have phone number as a required field. Fortunately, they usually accept 0000000 or some other bogus non-phone number as a “valid” number…if not, i give them their own phone number.

3. the thing that really shits me the most about online shopping is that both couriers and australia post suck.

the lazy fuckers usually don’t even bother attempting to deliver to residential addresses.
at best, they just stick a “non-delivery” card in your letterbox without even attempting to knock on the door…and sometimes they don’t even bother doing that. the delivery driver just files a bogus “failed to deliver” or “recipient refused delivery” with their office.

i’ve made four online purchases in the last month and a half. only one of them was delivered correctly to my door, even though I was sitting at my desk in the front room of the house, a few feet from my front door, and regularly monitoring the parcel’s tracking web page so I knew it was arriving.

parcel 1 (Nov 16): DHL from NZ to AU. delivered perfectly, in about two days, for $17. order value: around $35.

parcel 2 (Nov 20): Aust Post from WA to Vic, “Express Post” (costs a few bucks more than standard post). order value: around $80. lazy driver didn’t bother to attempt delivery. left card in letter box. i had to pick it up 2 days later from a post office the next suburb over (it wasn’t there yet on the first attempt to pick up, the next day). Aust Post “offered” to maybe put me in touch with their re-delivery people who might possibly be able to re-deliver the parcel next week sometime maybe, if they felt like it. knowing that I had another hospital appt the next week, I declined (if i hadn’t, it would have been an absolute certainty that they would try to deliver it while i was out).

parcel 3 (nov 29): DHL from NZ to AU. order value: around $85. lazy courier didn’t even bother to leave a card, just logged it as “Recipient refused delivery”. This “refused delivery” lie infuriated me when i saw it on the tracking page, so i phoned and yelled at them a lot. DHL delivered it later the same day. if they hadn’t, I would have had to get myself over to the other side of town during business hours and pick it up from their South Melbourne depot.

parcel 4 (today, Dec 16): Aust Post from NSW to Vic, “Express Post”. order value: around $160. lazy driver didn’t even bother to leave a card, just logged it as “Attempted delivery – being carded to Australia Post outlet”. Complete bullshit. There was no delivery attempt, not even a card in the letterbox, let alone a knock on the door with my parcel. Rang Aust Post on 8847 9045 (a phone number curiously absent from their tracking page, but which I had written down after their failure to deliver on Nov 20) and complained. they claim that they will try to deliver it today. i’ll believe it when I see it.

UPDATE @4.30pm: no, it’s not going to be delivered today. they say they might choose to deliver it tomorrow or sometime in the future, if the delivery center manager feels like it. i told them that this is not an option – my parcel WILL be delivered, it is what they were paid to do so it is what they ARE going to do. btw, the tracking page says that they “attempted delivery” at 12.15 today (they didn’t – not even a card in the letterbox), but the tracking page wasn’t updated with that until about 2.30pm. by a complete non-coincidence their delivery center allegedly closes at 2pm, so it’s impossible to complain about non-delivery in time for the complaint to do any good.

Score so far: DHL 50% delivery rate. Aust Post: 100% failure rate. Annoyance Rate: extremely high. Desire to repeat the experience: non-existent.

The Australia Post delivery problems are particularly annoying. I’ve always thought of the network of local post offices as a huge advantage for parcel delivery – if i’m not home, my parcel will go to a local post office and I can pick it up from there. And this is a great advantage. I used to take advantage of it when I was well enough to work. In fact, I’d insist on delivery by Aust. Post rather than by a courier for exactly this reason: a local post office is far more convenient than some courier’s depot on the other side of town or out in some far outer-suburban hellhole. But when it’s used as an excuse to not even bother attempting delivery, it sucks. I’m not well, and I should have been lying in bed reading or sleeping today, not wasting what little energy I have waiting for a parcel that never arrived…and I certainly shouldn’t have to phone Aust Post or DHL to yell at them for failing to do the job that they have been paid to do.

shopping online – whinge of the day is a post from: Errata

# December 15, 2013

## Alessandro Ghedini

### Building Debian packages using Linux namespaces

In the past few days I have been messing around with Linux namespaces, and developed a little tool (pflask) that automates the creation of simple Linux containers based on them (a sort of chroot(8) on steroids if you will).

While the whole raison d'être behind this project was "just because", and many more mature solutions exist, I decided that it'd be nice to find an actual use case for this (otherwise I tend to lose interest pretty quickly) so I wrote a lil (and rather dumb) pbuilder clone that uses pflask instead of chroot.

The nice thing about pflask is that, differently from e.g. LXC, it doesn't need any pre-configuration and can be used directly on a vanilla debootstrap(8)ed Debian system:

    $ sudo mkdir -p /var/cache/pflask
    $ sudo debootstrap --variant=buildd $DIST /var/cache/pflask/base-$DIST-$ARCH

Where $DIST and $ARCH are e.g. unstable and amd64. Once that's done, just run pflask-debuild on the package sources:

    $ apt-get source somepackage
    $ cd somepackage-XYX
    $ pflask-debuild

The script will take care of creating a new container, chroot(2)ing into it, installing all the required dependencies, building and signing the package (it also runs lintian!).

The main difference from pbuilder is that pflask will mount a copy-on-write filesystem (using AuFS) on the / of the container so that any modification (e.g. installation of packages) can be easily discarded once the container terminates (similarly to what cowbuilder(8) does, modulo the hardlinks hack).

Additionally, thanks to the mount namespace created inside the container, all of this will be isolated from the host system and other containers, so that multiple packages can be built simultaneously on the same base debootstrapped directory.

Another possibility would be disabling the network inside the container using a network namespace, in order to prevent the package build system from downloading stuff from the Internet while keeping the network active on the host system, but I haven't done any experiments in this direction yet.
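As a sketch of that idea (my own illustration, not part of pflask): util-linux's unshare(1) can run a command in a fresh network namespace, where only an inactive loopback device exists. The wrapper below only constructs the command line; `pflask-debuild` is the script described above.

```python
def network_isolated(build_cmd):
    """Wrap a build command so it runs in a new network namespace.

    "unshare --net" (util-linux) gives the command a private network
    stack containing only a down loopback device, so the build cannot
    reach the Internet while the host's network stays up. Requires root.
    """
    return ["unshare", "--net"] + list(build_cmd)

# In real use one would hand this to subprocess.run(..., check=True).
cmd = network_isolated(["pflask-debuild"])
print(cmd)  # ['unshare', '--net', 'pflask-debuild']
```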

Note that all of this is rather crude and experimental, but as a little hack it seems to work rather well (YMMV).

## Chris Lamb

### Developing responsive websites using the Awesome window manager

Responsive web design is a technique for offering the optimal user experience across different browser devices without the overhead of separate "mobile" sites or native applications.

Whilst there are hundreds of browser plugins that can make it easier to test responsive websites by resizing your browser window, this can also be achieved with a scriptable window manager such as Awesome:

    resize = function(width, height, text)
        return function(c)
            local w = capi.screen[c.screen].workarea
            local geo = c:geometry()

            awful.client.floating.set(c, true)
            -- Float before raising
            c:raise()
            naughty.notify({ text = text, timeout = 0.5 })

            c:geometry({
                x = geo.x,
                -- Constrain Y to workspace or it can start too high
                y = (w.y > geo["y"]) and w.y or geo["y"],
                width = width,
                height = height,
            })
        end
    end

    resize_reset = function (c)
        awful.client.floating.set(c, false)
    end

    clientkeys = awful.util.table.join(
        -- [..]
        awful.key({ modkey }, "1", resize( 460, 650, "Extra small")),
        awful.key({ modkey }, "2", resize( 780, 750, "Small")),
        awful.key({ modkey }, "3", resize(1024, 800, "Medium")),
        awful.key({ modkey }, "4", resize(1250, 850, "Large")),
        awful.key({ modkey }, "5", resize_reset)
    )

Alt+{1,2,3,4} will then float and resize the current window to various common device sizes (mine are based on the Bootstrap 3 breakpoints) and Alt+5 will reset it.

Not being a plugin has many advantages: not only is it a browser-agnostic solution that can use keyboard shortcuts unavailable to plugin developers, it also feels more aesthetically pleasing to solve the problem at the correct abstraction layer.

## Daniel Pocock

### A bias against Bitcoin?

A statement from the European Banking Authority is warning people about the risks of using Bitcoin for payment and investment/speculation.

The press notes that the EBA paper stops short of telling people not to use the digital currency.

Many of the points made in the warning appear to be well founded on the facts: however, exactly the same points could be made about everything from gift vouchers to bullion. It is not clear why Bitcoin should be singled out for special attention like this.

Yes, gold and silver bullion are not issued by any central authority. Bullion values rise and fall many times each day. When bullion values dipped earlier in 2013, some short-term speculators sold at a loss: but for every scared seller, there was also somebody else willing to buy. Has anybody seen any stories of disgruntled investors throwing their gold out with the rubbish when the market moved?

A more interesting example is that of the gift vouchers offered by many chains of retail stores. With many stores going through restructuring and liquidation in recent years, customers holding the vouchers have been left high and dry. Unlike temporary dips in bullion or Bitcoin values, the vouchers have often become truly worthless. The story has been repeated again and again around the world, including HMV in the UK and Mothercare in Australia.

### Community sense

When retailers enter bankruptcy, it is usually at the instigation of a single person such as a prominent shareholder or the lender. Bitcoin and bullion, on the other hand, are arguably much harder for any single person or small group to pull the plug on so dramatically. Their value depends on their widespread acceptance around the globe rather than any single point of failure.

### Bitcoin's future

I'm yet to see any evidence that Bitcoin is a long term store of value. Even the most knowledgeable proponents of Bitcoin agree that it is not technically perfect, due to the lack of true anonymity. When a truly anonymous solution is found, it will be interesting to see if the industry migrates away from Bitcoin.

On the other hand, the threat of a truly anonymous community cryptocurrency is a scary thing for governments in the surveillance age. It could be argued that governments may become active players in the Bitcoin market in order to keep more anonymous solutions from gaining market share. For example, if a government were to accept remittance of taxes in Bitcoin, or if smaller countries were to use Bitcoin to facilitate foreign exchange (bypassing the use of US dollars), this would make the currency far more credible for other mainstream purposes.

15 December, 2013 04:41PM by Daniel.Pocock

## Russell Coker

### Dell PowerEdge T110

In June 2008 I received a Dell PowerEdge T105 server to run in my home for a client [1]. That system has run well for over 5 years, both for the purposes of my client and as my own home fileserver and workstation. But now it’s getting a bit old: while it was still basically working, the cooling fans were getting noisy, faster systems are available, and it was crashing occasionally, which could have been due to hardware or software.

On the 7th of November I got a new Dell PowerEdge T110. It’s got an i3-3220 CPU (a score of 4218 according to cpubenchmark.net), which is a lot better than the AMD 1212 (a score of 982). It takes up to 4*3.5″ SATA disks (as opposed to 2 disks) and has more options for memory expansion. Next time I run out of disk space I’ll add another RAID-1 pair of disks instead of buying new disks.

Generally this system is much the same as the one it replaces. It’s a cheap server which unfortunately lacks sound hardware and usable video hardware. Sound is a problem I already solved with USB speakers but for the new system I bought a PCIe video card. Fortunately the system has PCIe*16 sockets (which apparently only have PCIe*8 wires) which avoids the problem I had in the past trying to obtain a suitable video card.

The crashes turned out to be due to BTRFS and now that I’ve made some tweaks everything is running well.

15 December, 2013 10:41AM by etbe

## Matthew Palmer

### So you think your test suite is comprehensive?

Compare and contrast your practices with those of the SQLite development team, who go so far as to run every test with versions of malloc(3) and the I/O syscalls that are rigged to fail, as well as special VFS layers which reorder and drop writes.
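The same fault-injection idea can be sketched in a few lines of Python (my illustration, not SQLite's actual harness): make the I/O layer injectable, then exercise the code once with a version that fails.

```python
import io

def save(data, open_fn):
    """Write data via open_fn(); report failure instead of crashing."""
    try:
        f = open_fn()
        f.write(data)
        return True
    except OSError:
        return False

def failing_open():
    # Stand-in for a faulting I/O layer, like SQLite's crash-test VFS.
    raise OSError("simulated write failure")

assert save("hello", io.StringIO) is True    # normal path works
assert save("hello", failing_open) is False  # failure path is handled, not fatal
```

SQLite goes much further, of course: it runs the *entire* test suite under each failure mode, not just targeted unit tests.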

I think this sentence sums it all up:

> By comparison, the project has 1084 times as much test code and test scripts – 91452.5 KSLOC.

One thousand times as much test code as production code. As Q3A says, “Impressive”.
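A quick sanity check on those figures (my arithmetic, not from the post): dividing the test-code size by the ratio recovers the size of the library itself.

```python
test_ksloc = 91452.5   # test code and scripts, from the quote above
ratio = 1084           # "1084 times as much test code"

library_ksloc = test_ksloc / ratio
print(round(library_ksloc, 1))  # 84.4 -- i.e. roughly 84 KSLOC of library C code
```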

15 December, 2013 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

## Hideki Yamane

### my Desktop Environment broken - any help?

Recently, my GNOME environment suddenly broke. GNOME Shell segfaults and dies on startup, and I cannot use GDM3 at all.

    henrich@hp:~$ grep gnome-shell /var/log/syslog
    Dec 15 10:51:12 hp kernel: [34944.969581] gnome-shell[10054]: segfault at 84 ip 00007f6c6186a7b9 sp 00007fff59f1dbc0 error 4 in libcogl.so.12.1.1[7f6c61821000+97000]
    Dec 15 10:51:12 hp gnome-session[9565]: WARNING: Application 'gnome-shell.desktop' killed by signal 11
    Dec 15 10:51:14 hp kernel: [34946.429046] gnome-shell[10202]: segfault at 84 ip 00007ffa2ae877b9 sp 00007fff2bc5c7d0 error 4 in libcogl.so.12.1.1[7ffa2ae3e000+97000]
    Dec 15 10:51:14 hp gnome-session[9565]: WARNING: Application 'gnome-shell.desktop' killed by signal 11
    Dec 15 10:51:14 hp gnome-session[9565]: WARNING: App 'gnome-shell.desktop' respawning too quickly

So I switched to KDM and installed xfce4. However, I faced another problem: it cannot show any icons (desktop and panel), and the Chromium window looks odd (why red?). Also, eog says "Could not load image: Unrecognized image file format" - well, why? It's just a .png file. (screenshot on KDE4) Shotwell segfaults too :-( (and so do some other GTK applications). I cannot point out the broken component. Dear lazyweb, could you tell me how to fix this situation, please?
15 December, 2013 04:10AM by Hideki Yamane (noreply@blogger.com)

# December 14, 2013

## Dirk Eddelbuettel

### RcppDE 0.1.2

A maintenance release (now at version 0.1.2) of my RcppDE package (previously described in these two posts) is now on CRAN. More details about the package are available in the vignette also included in the RcppDE R package.

Changes were minimal and driven mostly by some CRAN Policy changes which now prefer vignette source files in the (top-level) directory vignettes/.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

### random 0.2.2

A maintenance release of my random package for truly (hardware-based) random numbers (pulled from random.org) is now on CRAN. It's been a while since previous releases. The package is described in a detailed vignette as well as in an essay by Mads Haahr.

Changes were minimal and driven mostly by some CRAN Policy changes which now prefer vignette source files in the (top-level) directory vignettes/.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.
Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

### gcbd 0.2.5

A maintenance release (now at version 0.2.5) of my gcbd package (described only in these two posts) is now on CRAN. More details about the package are available in the paper which is also included in the gcbd R package.

Changes were minimal and driven mostly by some CRAN Policy changes which now prefer vignette source files in the (top-level) directory vignettes/.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

## Richard Hartmann

### SteamOS

So SteamOS has been released. While that's marginally interesting in and of itself, there are two observations to be made:

1. Microsoft, Nintendo, and Sony will feel an impact. More functionality on cheaper hardware which can easily be upgraded; I bet quite a few managers are not happy at the moment.
2. While the stand-alone Linux Steam client was initially targeted at Ubuntu, SteamOS is based on Debian Wheezy.
#### Actual Linux (not Android) for the end user

The first one means more Linux installations. In the living room. On a machine that children are really focussed on and will want to play with, quite literally. The next logical step is for people who play games to install SteamOS on their other machines: desktops, laptops, everywhere they want to game. This could really be the tipping point where the average adolescent computer enthusiast does not need to reboot into Linux to fool around, but the other way round: they need to boot into Windows only for a few select legacy applications which don't run on the FLOSS variant, Wine, like Office or Photoshop. And once this momentum starts to shift, other software vendors will follow the money trail. Could 2014 finally be... the year of Linux on the desktop...?

#### Debian vs Debian-based

The latter one is also really interesting. Obviously, I don't know why Valve decided to go down this road, but several reasons come to mind:

- No need for the extra bloat
- They wanted to avoid the tie-in with another for-profit entity
- Unhappiness with some technical decisions made by Ubuntu/Canonical
- Lack of faith in the long-term governance of Ubuntu

What we are left with is a major player entering the ring of Linux for end users and choosing Debian over Ubuntu. Hopefully, improvements to the base system will be fed back upstream, enabling all Debian-based distributions to profit easily, not only Ubuntu-based ones. I am willing to bet that two years ago, SteamOS would have been based on Ubuntu, not Debian. Recently, there's been a lot of backlash over various decisions which Canonical forced onto Ubuntu, and it will be interesting to see how this plays out in the long run. It will also be interesting to see how much pain Linux Mint and Kubuntu will endure.

#### Forecast

##### Users

All in all, we are looking at a massive influx of new users into the Debian ecosystem. How massive? 65 million registered users massive.
7 million concurrent users massive; 1.2 million users actively playing the top 100 games at the same time massive. This is huge.

##### Contributors

In time, a substantial part of that userbase will switch one or more of their machines over to SteamOS. The tinkerers among them will realize they can install plain Debian and install Steam as a package. The hackers among those will start to improve upon their systems; and what better way to do that than to go upstream? If even a tiny fraction of users makes it this far, the count of actively involved Debian contributors will skyrocket, if we let them join. Raspbian and some other not-quite-ideal decisions come to mind.

##### Vendors

Commercial software vendors need to stay profitable. Thus, they are forced to support distributions which promise enough paying users. In the past, this meant mainly SuSE and Red Hat; they had commercial backers, went through certifications, etc. In the recent past, this also meant Ubuntu. All of a sudden, Debian stable has a potential market of tens of millions of average computer users and computer enthusiasts. A lot of whom will want to continue to use their OS of choice at work as well. Oh boy...
14 December, 2013 11:57AM by Richard 'RichiH' Hartmann

## NOKUBI Takatsugu

### The 1st Debian “mokumoku” meeting

I attended the 1st Debian “mokumoku” meeting. “Mokumoku” (黙々) means “silently and with concentration” in Japanese. There were about 6 people in a room. I tried to prepare releases of Namazu and KAKASI.

14 December, 2013 07:49AM by knok

## Charles Plessy

### Tired

This morning, while preparing the update of a package, I saw a PNG file in its documentation.
Even before opening it, I felt old, tired, distressed and unable to escape the situation. Inspecting the file confirmed that the image was not hand-made, and the source file is missing. The proof: another image in the same directory has the same style and an SVG source. To make things worse, there are no instructions on how to generate the PNG from the SVG. More and more in these cases, I give up and abandon the package. I lose time and energy making demands of upstream in which I personally have no concrete interest. SVG in addition to PNG is better, but PNG alone for the documentation of a Free software is Free enough for me. However, the points of view expressed on debian-devel give me the impression that it is not good enough for Debian, so I just give up…

# December 13, 2013

## Gerfried Fuchs

### [dunkelbunt]

Tuesday was a really nice evening. A few weeks ago I found a poster for the concert of [dunkelbunt], and got my ticket only on Monday; I was told by the ticket sellers that they still had plenty left. When I turned up at the event on Tuesday, though, the concert hall was fully packed with people and I was told that it actually was sold out. There wasn't much room left inside the hall, so I mostly stood in the doorway to the bar area and enjoyed the music from there.
If you listen to their songs you might get an idea why the music caught me and I started to let it move my body, literally. It's a great feeling after a tough day, and there were some other nice people around who let the same happen to them, so it felt less awkward for me. Anyway, if you want to find out if their music can do the same to you, here are some songs to listen to:

- The Chocolate Butterfly: This was actually the first song that got me interested in them; it was playing on a local radio station.
- Cinnamon Girl: One of the reasons why [dunkelbunt] is put into the electro swing genre. :)
- Schlawiener: The title is a pun, a mix between "Schlawiner" (smooth operator) and "Wiener" (Viennese).

Enjoy!

13 December, 2013 11:17PM by Rhonda

## Keith Packard

### Cleaning up X server warnings

So I was sitting in the Narita airport with a couple of other free software developers merging X server patches. One of the developers was looking over my shoulder while the X server was building and casually commented on the number of warnings generated by the compiler. I felt like I had invited someone into my house without cleaning for months — embarrassed and ashamed that we’d let the code devolve into this state.
Of course, we’ve got excuses — the X server code base is one of the oldest pieces of regularly used free software in existence. It was started before ANSI C was codified: no function prototypes, no ‘const’, no ‘void *’, no enums or stdint.h. There may be a few developers out there who remember those days (fondly, of course), but I think most of us are glad that our favorite systems language has gained a lot of compile-time checking in the last 25 years. We’ve spent time in the past adding function prototypes and cleaning up other warnings, but there was never a point at which the X server compiled without any warnings. More recently, we’ve added a pile of new warning flags when compiling the X server, which only served to increase the number of warnings dramatically.

#### The current situation

With the master branch of the X server and released versions of the dependencies, we generate 1047 warnings in the default build.

#### -Wcast-qual considered chatty

The GCC flag -Wcast-qual complains when you cast a pointer and change the ‘const’ qualifier status. A very common thing for the X server to do is declare pointers as ‘const’ to mark them as immutable once assigned. Often, the relevant data is actually constructed once at startup in allocated memory and stored to the data structure. During server reset, that needs to be freed, but free doesn’t take a const pointer, so we cast to (void *), which -Wcast-qual then complains about. Loudly. Of the 1047 warnings, 380 of them are generated by this one warning flag. I’ve gone ahead and just disabled it in util/macros for now.

#### String constants are a pain

The X server uses string constants to initialize defaults for font paths, configuration options and font names, along with a host of other things. These end up getting stored in variables that can also take allocated storage. I’ve gone ahead and declared the relevant objects as const and then fixed the code to suit.
I don’t have a count of the number of warnings these changes fixed; they were scattered across dozens of X server directories, and I was fixing one directory at a time, but probably more than half of the remaining warnings were of this form.

#### And a host of other warnings

Fixing the rest of the warnings was mostly a matter of stepping through them one at a time and actually adjusting the code. Shadowed declarations, unused values, redundant declarations and missing printf attributes were probably the bulk of them, though.

#### Changes to external modules

Instead of just hacking the X server code, I’ve created patches for other modules where necessary to fix the problems in the “right” place:

- proto/fontsproto: declares FontPathElement names as ‘const char *’
- mesa/drm: adds a ‘printf’ attribute to the debug_print function
- util/macros: removes -Wcast-qual from the default warning set

#### Getting the bits

In case it wasn’t clear, the X server build now generates zero warnings on my machine. I’m hoping that this will also be true for other people. Patches are available at:

    xserver     - git://people.freedesktop.org/~keithp/xserver warning-fixes
    fontsproto  - git://people.freedesktop.org/~keithp/fontsproto fontsproto-next
    mesa/drm    - git://people.freedesktop.org/~keithp/drm warning-fixes
    util/macros - already upstream on master

#### Keeping our house clean

Of course, these patches are all waiting until 1.15 ships so that we don’t accidentally break something important. However, once they’re merged, I’ll be bouncing any patches which generate warnings on my system, and if other people find warnings when they build, I’ll expect them to send patches as well.

Now to go collect the tea cups in my office and get them washed along with the breakfast dishes so I won’t be embarrassed if some free software developers show up for lunch today.
## Daniel Kahn Gillmor

### OpenPGP Key IDs are not useful

#### Fingerprints and Key IDs

OpenPGP v4 fingerprints are made from a SHA-1 digest over the key's public key material, creation date, and some boilerplate. SHA-1 digests are 160 bits in length. The "long key ID" of a key is the last 64 bits of the key's fingerprint. The "short key ID" of a key is the last 32 bits of the key's fingerprint. You can see both of the key IDs as hashes in and of themselves, as "32-bit truncated SHA-1" is a sort of hash (albeit not a cryptographically secure one).

I'm arguing here that short key IDs and long key IDs are actually useless, and we should stop using them entirely where we can do so. We certainly should not be exposing normal human users to them.

(Note that I am not arguing that OpenPGP v4 fingerprints themselves are cryptographically insecure. I do not believe that there are any serious cryptographic risks currently associated with OpenPGP v4 fingerprints. This post is about key IDs specifically, not fingerprints.)

#### Key IDs have serious problems

Asheesh pointed out two years ago that OpenPGP short key IDs are bad because they are trivial to replicate. This is called a preimage attack against the short key ID (which is just a truncated fingerprint).
Today, David Leon Gil demonstrated that a collision attack against the long key ID is also trivial. A collision attack differs from a preimage attack in that the attacker gets to generate two different things that both have the same digest. Collision attacks are easier than preimage attacks because of the birthday paradox. dlg's colliding keys are not a surprise, but hopefully the explicit demonstration can serve as a wakeup call to help us improve our infrastructure. So this is not a way to spoof a specific target's long key ID on its own. But it indicates that it's more of a worry than most people tend to think about or plan for. And remember that for a search space as small as 64-bits (the long key ID), if you want to find a pre-image against any one of 2k keys, your search is actually only in a (64-k)-bit space to find a single pre-image. The particularly bad news: gpg doesn't cope well with the two keys that have the same long key ID: 0 dkg@alice:~$ gpg --import x
gpg: key B8EBE1AF: public key "9E669861368BCA0BE42DAF7DDDA252EBB8EBE1AF" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
0 dkg@alice:~$ gpg --import y
gpg: key B8EBE1AF: doesn't match our copy
gpg: Total number processed: 1
2 dkg@alice:~$
This probably also means that caff (from the signing-party package) will also choke when trying to deal with these two keys.

I'm sure there are other OpenPGP-related tools that will fail in the face of two keys with matching 64-bit key IDs.
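Neither the truncation nor the cheapness of collisions is mysterious. Here is a small Python sketch (my illustration, not dkg's code) that derives short and long key IDs by truncating a fingerprint, and then finds a collision on an even shorter truncation to show the birthday paradox at work:

```python
import hashlib

def long_key_id(fpr: str) -> str:
    """Long key ID: the last 64 bits (16 hex digits) of the fingerprint."""
    return fpr.replace(" ", "").upper()[-16:]

def short_key_id(fpr: str) -> str:
    """Short key ID: the last 32 bits (8 hex digits) of the fingerprint."""
    return fpr.replace(" ", "").upper()[-8:]

# The fingerprint from the gpg transcript above:
fpr = "9E669861368BCA0BE42DAF7DDDA252EBB8EBE1AF"
print(long_key_id(fpr))   # DDA252EBB8EBE1AF
print(short_key_id(fpr))  # B8EBE1AF -- the ID gpg printed

# Birthday paradox on a toy 24-bit truncation: a collision is expected
# after roughly 2**12 (about 4000) hashes, not 2**24.
def find_collision(bits: int = 24):
    seen = {}
    i = 0
    while True:
        h = int.from_bytes(hashlib.sha1(i.to_bytes(8, "big")).digest(), "big")
        h &= (1 << bits) - 1          # keep only the low `bits` bits
        if h in seen:
            return seen[h], i, h      # two inputs, same truncated digest
        seen[h] = i
        i += 1

a, b, h = find_collision()
print(f"inputs {a} and {b} collide on {h:06x}")
```

Finding a preimage of a *specific* truncated value is harder than finding some collision, but for a 32-bit short key ID even the preimage search is well within reach of a laptop.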

### We should not use Key IDs

I am more convinced than ever that key IDs (both short and long) are actively problematic to real-world use of OpenPGP. We want two things from a key management framework: unforgeability, and human-intelligible handles. Key IDs fail at both.
• Fingerprints are unforgeable (as much as SHA-1's preimage resistance allows, anyway -- that's a separate discussion), but they aren't human-intelligible.
• User IDs are human-intelligible, and they are unforgeable if we can rely on a robust keysigning network.
• Key IDs (both short and long) are neither human-intelligible nor unforgeable (regardless of the existence of a keysigning network), so they are the worst of all possible worlds.
So reasonable tools should not expose either short or long key IDs to users, or use them internally if they can avoid them. They do not have any properties we want, and in the worst case they actively mislead people or lead them into harm. What reasonable tool would do that?

### How to replace Key IDs

If we're not going to use Key IDs, what should we do instead?

For anything human-facing, we should be using human-intelligible things like user IDs and creation dates. These are trivial to forge, but people can relate to them. This is better than offering the user something that is also trivial to forge, but that people cannot relate to. The job of any key management UI should be to interpret the cryptographic assurances provided by the certifications and present them to the user in a comprehensible way.

For anything not human-facing (e.g. key management data storage, etc), we should be using the full key itself. We'll also want to store the full fingerprint as an index, since that is used for communication and key exchange (e.g. on calling cards).

There remain parts of the spec (e.g. PK-ESK, Issuer subpackets) that make some use of the long key ID in ways that provide some measure of convenience but no real cryptographic security. We should fix the spec to stop using those, and either remove them entirely, or replace them with the full fingerprints. These fixes are not as urgent as the user-facing changes or the critical internal indexing fixes, though.

Key IDs are not useful. We should stop using them.

Tags: collision, crypto, gpg, openpgp, pgp, security

13 December, 2013 08:04PM by Daniel Kahn Gillmor (dkg)

## Tanguy Ortolo <!-- document.write( "<a href=\"#\" id=\"http://tanguy.ortolo.eu/blog/article119/pure-sensia_hide\" onClick=\"exclude( 'http://tanguy.ortolo.eu/blog/article119/pure-sensia' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://tanguy.ortolo.eu/blog/article119/pure-sensia_show\" style=\"display:none;\" onClick=\"show( 'http://tanguy.ortolo.eu/blog/article119/pure-sensia' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

Thanks to a corporate reward program, I just got a Pure Sensia digital and Internet radio receiver: basically, it is a device able to play streams from FM, DAB, HTTP and USB sticks. Overall, it works fine, and it has a remote control, so it makes a nice addition to my home equipment, but it has what I consider a major flaw, which I suspect was designed on purpose.

For playing streams from FM or DAB, the process is rather simple: you select a frequency and it plays; nothing else is involved. I have not tried USB yet, but it should be similar: you select a file or a playlist and it plays. But for HTTP streams, it is quite different: you select a stream from the “Pure Connect” directory, which is a list of HTTP streaming services maintained by the manufacturer, Pure.

This raises three concerns:

1. If all HTTP stream access goes through that remote directory, it probably means Pure knows, and possibly logs, every stream you listen to. That is not acceptable.
2. What will happen when that service is shut down? Not if it is shut down, mind you, but when it is, because it will: I have never heard of any company keeping a service running forever, or indeed of any company lasting forever. Well, here is what will happen: all these digital and Internet radio receivers will become digital but not Internet radio receivers. That is not acceptable: when you buy a radio receiver, you buy a device, not a service of indefinite term.
3. What do you do if you want to listen to an HTTP stream which is not listed in Pure's directory? The answer from Pure's support: you can add custom streams by URL to your Pure account's favourites. Well, good try, but that is not enough, or rather, it is too much: requiring a Pure account for that is an artificial restriction, which suffers from exactly the same flaw as the Pure directory. And letting a single company know every Internet stream you listen to is not acceptable either.

Considering that flaw, here is my overall verdict on this radio receiver: it is based on a good idea, with a good overall design, but it is implemented in a precarious way. If you buy one of these things, you should know that you are not buying a complete digital and Internet radio receiver, but only a digital radio receiver with some Internet features that raise privacy concerns, and that will work for a time and one day stop working at Pure's discretion.

13 December, 2013 01:16PM by Tanguy

## Sylvain Le Gall <!-- document.write( "<a href=\"#\" id=\"http://le-gall.net/sylvain+violaine/blog/index.php?post/2013/12/11/Release-of-OASIS-0.4.0_hide\" onClick=\"exclude( 'http://le-gall.net/sylvain+violaine/blog/index.php?post/2013/12/11/Release-of-OASIS-0.4.0' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://le-gall.net/sylvain+violaine/blog/index.php?post/2013/12/11/Release-of-OASIS-0.4.0_show\" style=\"display:none;\" onClick=\"show( 'http://le-gall.net/sylvain+violaine/blog/index.php?post/2013/12/11/Release-of-OASIS-0.4.0' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Release of OASIS 0.4.0

I am happy to announce that OASIS v0.4.0 has just been released.

OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It helps create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

I have recently resumed my work on OASIS, and this will hopefully be the version that leads to quicker iteration in the development of OASIS. The development process had been slowed down by my fear of introducing new fields in _oasis or causing regressions. This was a pain, and I decided to change my development model.

### Features

The most important step is the introduction of the AlphaFeatures and BetaFeatures fields. They make it possible to introduce pieces of code that are only activated if certain features are listed in those fields. This should help keep the project always ready to release.

The features also cover other aspects, like flag_tests and flag_docs, which were introduced in OASIS v0.3.0. In fact, the features API is now used to introduce all enhancements while keeping backward compatibility with regard to OASISFormat. Rather than defining a ~since_version:0.3 for fields, we use a feature that handles the maturity level of the change. When I feel a specific feature is ready to ship, I just change InDev Alpha to InDev Beta and then to SinceVersion 0.4. In the long term, when we no longer support any version of OASIS that existed before the SinceVersion, the feature will always be true and I will fully integrate it into the code.

The only constraint around features is: if you use the AlphaFeatures or BetaFeatures field, you must use the latest OASISFormat.

See the Features section in the manual.

Example of features available:

• section_object: allows creating objects (.cmo/.cmx) in _oasis
• pure_interface: an OCamlbuild feature that allows handling an .mli without a corresponding .ml file
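As an illustration (my own sketch, not taken from the release notes; the project name, paths and modules are hypothetical), an _oasis file opting into the section_object alpha feature might look roughly like this:

```
OASISFormat: 0.4
Name: example
Version: 0.1.0
Synopsis: Toy project using an alpha feature
Authors: Jane Hacker
License: MIT
AlphaFeatures: section_object

Object foo
  Path: src
  Modules: Foo
```

Note how the AlphaFeatures field and the latest OASISFormat go together, as required by the constraint above.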

### Automate

Another topic is the automation of release testing. For OASIS v0.3.0, I ran tests on all platforms manually, late in the development cycle, and it was painful to fix the failures. So I have set up a Jenkins instance that automates testing on Linux. In the long term, I plan to also set up a Mac OS X builder and start looking at Windows as well. This should help me catch errors early and fix them quickly.

However, for v0.4.0 I have decided to just release what I have, which has mainly been tested on Linux. The point is to release and iterate quickly, rather than wait for perfection. Hopefully end-user testing will quickly uncover any new bugs.

### Time boxed release

In the coming months, I will try to do time-boxed releases: I will aim to release a version of OASIS on the 15th of every month. The point is to iterate faster and avoid long delays between releases.

See you in 1 month for the next release.

13 December, 2013 02:14AM by gildor

## Jonathan McCrohan <!-- document.write( "<a href=\"#\" id=\"http://dereenigne.org/linux/linux-kernel-contributor_hide\" onClick=\"exclude( 'http://dereenigne.org/linux/linux-kernel-contributor' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://dereenigne.org/linux/linux-kernel-contributor_show\" style=\"display:none;\" onClick=\"show( 'http://dereenigne.org/linux/linux-kernel-contributor' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Linux Kernel Contributor

Having used GNU/Linux systems for some time now, and having submitted patches to a fair number of open source projects, it is nice to finally get a patch accepted into the biggest open source project of them all, the Linux kernel. While I did submit a kernel patch to OpenWrt back in 2011, it is maintained as a rebased patchset, and was never upstreamed to Linus' tree.

That changed today, though, when a small patch I (had forgotten I had) sent to the linux-media mailing list back in October 2013 was pulled by Linus Torvalds into his tree for the Linux 3.13-rc4 release; so I'm now proud to be able to call myself a contributor to the Linux kernel.

13 December, 2013 01:22AM by jmccrohan

## Daniel Pocock <!-- document.write( "<a href=\"#\" id=\"http://danielpocock.com/xwiki-ten-years-and-webrtc_hide\" onClick=\"exclude( 'http://danielpocock.com/xwiki-ten-years-and-webrtc' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://danielpocock.com/xwiki-ten-years-and-webrtc_show\" style=\"display:none;\" onClick=\"show( 'http://danielpocock.com/xwiki-ten-years-and-webrtc' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### xWiki: 10 years and a WebRTC success story

Six months ago, I wrote to the leaders of several open source web frameworks and asked them about their vision for WebRTC and if they would come to the WebRTC Conference in Paris this week (now finished). The most promising response was from Ludovic Dubost, founder of the xWiki project.

Ludovic successfully demonstrated their shiny new WebRTC capabilities today in front of an audience including many far more experienced telephony operators who are still coming to terms with this technology.

### What is xWiki?

Don't let the wiki name limit your perception of this project. xWiki is a lot more than just another wiki hosting framework. As a bare minimum, you can certainly use it in the same way as other wikis, doing lightweight markup that is easier than HTML. On the other hand, xWiki really shines when it comes to extensibility. The xWiki team are Java developers and so xWiki appeals most to other Java developers who may want to leverage some library code from their web portal from time to time, without even having to compile anything. Here are some examples and here is one of the most trivial ones:

{{velocity}}
Your username is $xcontext.getUser(), welcome to the site.
{{/velocity}}

### WebRTC capabilities

The xWiki team chose XMPP as a chat protocol (using the Candy XMPP JavaScript chat) as the foundation for real-time communication. They have then extended this by creating a custom signalling mechanism and making it convenient for users of a chat session to upgrade the session to voice/video with a mouse click. The whole experience works within the browser without any plugins.

### 10 years of xWiki

It was also xWiki's 10th birthday today and this provided the perfect opportunity for a party:

### cjdns and enigmabox

While at the xWiki office, it was interesting to see a lot of innovative work taking place, including this VoIP setup where a Grandstream phone is attached to an Enigmabox operated by Caleb from the cjdns project.

13 December, 2013 12:09AM by Daniel.Pocock

# December 12, 2013

## Christine Spang <!-- document.write( "<a href=\"#\" id=\"http://blog.spang.cc/posts/Donate_to_OpenHatch/_hide\" onClick=\"exclude( 'http://blog.spang.cc/posts/Donate_to_OpenHatch/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.spang.cc/posts/Donate_to_OpenHatch/_show\" style=\"display:none;\" onClick=\"show( 'http://blog.spang.cc/posts/Donate_to_OpenHatch/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Donate to OpenHatch

I just donated $500 to OpenHatch. Here's why you should donate too:

1. Diversity in open source matters. We can't keep making the software the world runs on without involving people of all sorts, from all backgrounds.
2. OpenHatch is run by community members who I've known for years and trust. They care about data-driven effectiveness and are always getting better at what they do.
3. A rising tide floats all boats. More contributors == more awesome.
4. If you donate before December 24th, your donation makes twice the difference.

Diversity and education initiatives are the reason I'm a part of the free and open source software community today. (Thanks, Debian Women.)

You don't have to donate $500 to make a difference. $5, $10 or $25, from a hundred people, all adds up.

Please join me in supporting OpenHatch today.

# December 11, 2013

## C.J. Adams-Collier <!-- document.write( "<a href=\"#\" id=\"http://wp.colliertech.org/cj/?p=1232_hide\" onClick=\"exclude( 'http://wp.colliertech.org/cj/?p=1232' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://wp.colliertech.org/cj/?p=1232_show\" style=\"display:none;\" onClick=\"show( 'http://wp.colliertech.org/cj/?p=1232' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### I miss you. Please come back?

...
Checking supported features...
Installing system database...
- SSL connections supported
Collecting tests...
Using server port 42388

==============================================================================

TEST                                      RESULT   TIME (ms) or COMMENT
--------------------------------------------------------------------------

worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019
oqgraph.basic                            [ skipped ]  No OQGraph
oqgraph.binlog                           [ skipped ]  No OQGraph
sphinx.sphinx                            [ skipped ]  No Sphinx
archive.archive-big                      [ skipped ]  Test needs --big-test
binlog.binlog_multi_engine               [ skipped ]  ndbcluster disabled
binlog.binlog_spurious_ddl_errors        [ disabled ]  BUG#11761680 2013-01-18 astha Fixed on mysql-5.6 and trunk
binlog.binlog_truncate_innodb            [ disabled ]  BUG#11764459 2010-10-20 anitha Originally disabled due to BUG#42643. Product bug fixed, but test changes needed
federated.federated_server               [ skipped ]  Test needs --big-test
...

11 December, 2013 07:01PM by C.J. Adams-Collier

## Daniel Pocock <!-- document.write( "<a href=\"#\" id=\"http://danielpocock.com/get-webrtc-going-faster_hide\" onClick=\"exclude( 'http://danielpocock.com/get-webrtc-going-faster' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://danielpocock.com/get-webrtc-going-faster_show\" style=\"display:none;\" onClick=\"show( 'http://danielpocock.com/get-webrtc-going-faster' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Get WebRTC going faster

On Saturday, Lumicall began offering free calls from browser to mobile using the free and open WebRTC technology. It should be no surprise that the service has been popular.

### Is it really free and open?

The only way to prove this technology is free is to help people implement it for themselves.

On Monday, I uploaded reSIProcate v1.9.0 beta7 packages to Debian. The reSIProcate SIP proxy, repro, is one of the core components of the solution behind the free Lumicall service.

Simply install the repro and resiprocate-turn-server packages using apt-get and make the following changes to the configuration (use your own IP addresses, of course). I've taken this diff from my own runtime environment, hiding only my passwords, so that you can see exactly how I got it working:

--- repro.config.orig	2013-12-11 17:36:27.179228324 +0100
+++ repro-ws.sip5060.net.config	2013-12-11 17:48:24.159938649 +0100
@@ -143,6 +143,41 @@
# Transport6TlsClientVerification = None
# Transport6RecordRouteUri = sip:h1.sipdomain.com;transport=WS

+Transport1Interface = 195.8.117.57:80
+Transport1Type = WS
+Transport1RecordRouteUri = auto
+
+Transport2Interface = 2001:67c:1388:1000::57:80
+Transport2Type = WS
+Transport2RecordRouteUri = auto
+
+Transport3Interface = 195.8.117.57:5060
+Transport3Type = TCP
+Transport3RecordRouteUri = auto
+
+Transport4Interface = 2001:67c:1388:1000::57:5060
+Transport4Type = TCP
+Transport4RecordRouteUri = auto
+
+Transport5Interface = 195.8.117.57:443
+Transport5Type = WSS
+#Transport5RecordRouteUri = auto
+Transport5TlsDomain = ws.sip5060.net
+Transport5TlsClientVerification = None
+Transport5RecordRouteUri = sip:ws.sip5060.net;transport=WSS
+Transport5TlsCertificate = /etc/ssl/ssl.crt/ws.sip5060.net-bundle.crt
+Transport5TlsPrivateKey = /etc/ssl/private/ws.sip5060.net-key.pem
+
+Transport6Interface = 2001:67c:1388:1000::57:443
+Transport6Type = WSS
+#Transport6RecordRouteUri = auto
+Transport6TlsDomain = ws.sip5060.net
+Transport6TlsClientVerification = None
+Transport6RecordRouteUri = sip:ws.sip5060.net;transport=WSS
+Transport6TlsCertificate = /etc/ssl/ssl.crt/ws.sip5060.net-bundle.crt
+Transport6TlsPrivateKey = /etc/ssl/private/ws.sip5060.net-key.pem
+
+
# Comma separated list of DNS servers, overrides default OS detected list (leave blank
# for default)
DNSServers =
@@ -455,7 +490,7 @@
ForceRecordRouting = false

# Assume path option
-AssumePath = false
+AssumePath = true

# Disable registrar
DisableRegistrar = false
@@ -481,7 +516,7 @@
# WARNING: Before enabling this, ensure you have a RecordRouteUri setup, or are using
# the alternate transport specification mechanism and defining a RecordRouteUri per
# transport: TransportXRecordRouteUri
-DisableOutbound = true
+DisableOutbound = false

# Set the draft version of outbound to support (default: RFC5626)
# Other accepted values are the versions of the IETF drafts, before RFC5626 was issued
@@ -505,7 +540,7 @@
# WARNING: Before enabling this, ensure you have a RecordRouteUri setup, or are using
# the alternate transport specification mechanism and defining a RecordRouteUri per
# transport: TransportXRecordRouteUri
-EnableFlowTokens = false
+EnableFlowTokens = true

# Enable use of flow-tokens in non-outbound cases for clients detected to be behind a NAT.
# This a more selective flow token hack mode for clients not supporting RFC5626.  The

This is a diff against the /etc/repro/repro.config file distributed in the Debian package version 1.9.0~beta7-1.

In the example above, I've included WSS transport definitions for WebSockets over TLS. Use the standard procedure for creating webserver SSL certificates to create certificates for repro, and make sure you insert the correct filenames in the TLS parameters above. I've also duplicated every transport for IPv6. If you don't want TLS/WSS or IPv6, just comment those entries out (and renumber the remaining transports).

### Web-based SIP proxy setup

Once you have repro running, go to the web admin interface (port 5080, username: admin, password: admin) and finish the setup using the web UI. The following steps are essential:

• Add any routes to external services (optional - in my next blog I'll demonstrate how to route WebRTC calls to Asterisk using the Debian packages and less than 20 lines of configuration)

### Set up reTurn, the TURN server

Add a user entry to the reTurn users file, for example:

test:notasecret:reTurn:authorized

IMPORTANT: the realm in the users file (reTurn in the example and default config) must be identical to the AuthenticationRealm in the /etc/reTurnServer.config file.
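To make that relationship concrete, here is a minimal excerpt (my sketch; only the option name, file path and realm value come from the text above, the rest of the file is omitted):

```
# /etc/reTurnServer.config (excerpt)
# This realm must be identical to the realm field ("reTurn")
# in each users-file entry, e.g. test:notasecret:reTurn:authorized
AuthenticationRealm = reTurn
```

If the two values differ, TURN authentication will fail even though the username and password are correct.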

### On your own web site

Simply install your own Apache server and clone the webrtc.lumicall.org demo site. Modify the file js/custom.js to include the settings for your own server.

# cd /var/www
# mkdir webcall
# cd webcall
# wget -r -nH http://webrtc.lumicall.org
# vi js/custom.js

In the custom.js, make sure you use a ws:// URL if you didn't set up SSL certificates and use a wss:// URL if you did. The IP or domain of your repro server must be in the ws:// or wss:// URL.

Now navigate to the URL ending with /webcall on your server.

### For RHEL, Fedora and other RPM users

Can somebody please assist with the review of the cajun-jsonapi dependency package so I can upload this new version of reSIProcate to Fedora? I'm also planning to make v1.9.0 available in EPEL6 when it is released in January.

### Questions?

11 December, 2013 05:25PM by Daniel.Pocock

## Steve Kemp <!-- document.write( "<a href=\"#\" id=\"http://blog.steve.org.uk/it_s_a_wonderful_life.html_hide\" onClick=\"exclude( 'http://blog.steve.org.uk/it_s_a_wonderful_life.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.steve.org.uk/it_s_a_wonderful_life.html_show\" style=\"display:none;\" onClick=\"show( 'http://blog.steve.org.uk/it_s_a_wonderful_life.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### It's a wonderful life

Today, here in the UK, the date is 11/12/13.

Today, here in Edinburgh, we became married.

I've already promised I will make no more than two jokes, ever, about "owning a wife". I will save them for suitable occasions.

## Gustavo Noronha Silva <!-- document.write( "<a href=\"#\" id=\"http://blog.kov.eti.br/2013/12/webkitgtk-hackfest-5-0-2013/_hide\" onClick=\"exclude( 'http://blog.kov.eti.br/2013/12/webkitgtk-hackfest-5-0-2013/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.kov.eti.br/2013/12/webkitgtk-hackfest-5-0-2013/_show\" style=\"display:none;\" onClick=\"show( 'http://blog.kov.eti.br/2013/12/webkitgtk-hackfest-5-0-2013/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### WebKitGTK+ hackfest 5.0 (2013)!

For the fifth year in a row the fearless WebKitGTK+ hackers have gathered in A Coruña to bring GNOME and the web closer. Igalia has organized and hosted it as usual, welcoming a record 30 people to its office. The GNOME Foundation has sponsored my trip, allowing me to fly the cool 18-seat propeller airplane from Lisbon to A Coruña, which is a nice adventure, and to have pulpo a feira for dinner, which I simply love! All that in addition to enjoying the company of so many great hackers.

Web with wider tabs and the new prefs dialog

The goals for the hackfest have been ambitious, as usual, but we made good headway on them. Web the browser (AKA Epiphany) has seen a ton of little improvements, with Carlos splitting the shell search provider into a separate binary, which allowed us to remove some hacks from the browser's session management code. It also makes testing changes to Web more convenient again. Jon McCann has been pounding at Web’s UI, making it sleeker, with tabs that expand to make better use of the available horizontal space in the tab bar, and new dialogs for preferences, cookies and password handling. I have made my tiny contribution by making Web discard tabs that were created just for what turned out to be a download. For this last day of the hackfest I plan to also fix an issue with text encoding detection and help track down a hang that happens upon page load.

Martin Robinson and Dan Winship hack

Martin Robinson and myself have as usual dived into the more disgusting and wide-reaching maintainership tasks that we have lots of trouble pushing forward in our day-to-day lives. Porting our build system to CMake has been one of these long-term goals, not because we love CMake (we don’t) or because we hate autotools (we do), but because it should make people’s lives easier when adding new files to the build, and should also make our build less hacky and quicker; it is sad to see how slow our build can be compared to something like Chromium, and we think a big part of the problem lies in how complex and dumb autotools and make can be. We have picked up a few of our old branches, brought them up to date and landed them, which now lets us build the main WebKit2GTK+ library through CMake in trunk. This is an important first step, but there’s plenty left to do.

Hackers take advantage of the icecream network for faster builds

Under the hood, Dan Winship has been pushing HTTP2 support for libsoup forward, with a dead-tree version of the spec by his side. He is refactoring libsoup internals to accommodate the new code paths. Still on the HTTP front, I have been updating soup’s MIME type sniffing support to match the newest living specification, which includes specifications for several new types and a new security feature introduced by Internet Explorer and later adopted by other browsers. The huge task of preparing the ground for one process per tab (or other kinds of process separation; this will still be a topic for discussion for a while) has been pushed forward by several hackers, with Carlos Garcia and Andy Wingo leading the charge.

Jon and Guillaume battling code

Other than that, I have been putting in some more work on improving the integration of the new Web Inspector with WebKitGTK+. Carlos has reviewed the patch to allow attaching the inspector to the right side of the window, but we have decided to split it in two: one part providing the functionality and one the API that will allow browsers to customize how that is done. There’s a lot of work to be done here; I plan to land at least this first patch during the hackfest. I have also fought one more battle in the never-ending User-Agent sniffing war, which it looks like we cannot win.

Hackers chillin’ at A Coruña

I am very happy to be here for the fifth year in a row, and I hope we will be meeting here for many more years to come! Thanks a lot to Igalia for sponsoring and hosting the hackfest, and to the GNOME foundation for making it possible for me to attend! See you in 2014!

## Rogério Brito <!-- document.write( "<a href=\"#\" id=\"http://cynic.cc/blog//posts/2013-12-11-trivial_fact_convexity_of_polyhedra/_hide\" onClick=\"exclude( 'http://cynic.cc/blog//posts/2013-12-11-trivial_fact_convexity_of_polyhedra/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://cynic.cc/blog//posts/2013-12-11-trivial_fact_convexity_of_polyhedra/_show\" style=\"display:none;\" onClick=\"show( 'http://cynic.cc/blog//posts/2013-12-11-trivial_fact_convexity_of_polyhedra/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Trivial fact: convexity of polyhedra

Just a trivial fact: every polyhedron that is used in linear programming is convex, that is, the solution set of $Ax \leq b$ is convex, for a matrix $A$ and a (column) vector $b$.

Proof: Take any $x', x''$ that satisfy the system of inequalities $Ax \leq b$. Then, for $0 \leq \lambda \leq 1$, we have that $\lambda Ax' \leq \lambda b$, that is, $A \lambda x' \leq \lambda b$. Similarly, for $x''$, we have that $A (1-\lambda) x'' \leq (1-\lambda) b$. Summing the inequalities, we get:
$$
A[\lambda x' + (1-\lambda) x''] \leq [\lambda + (1-\lambda)] b = b,
$$
which means that $\hat{x} = \lambda x' + (1-\lambda) x''$ is again a solution of the original set of inequalities, thus concluding the argument.
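As a quick numeric sanity check of the argument (my own addition, using an arbitrary example polyhedron), one can verify that convex combinations of feasible points remain feasible:

```python
def feasible(A, b, x, eps=1e-12):
    """Check Ax <= b componentwise (with a small tolerance)."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi + eps
               for row, bi in zip(A, b))

# The triangle x >= 0, y >= 0, x + y <= 1, written as Ax <= b.
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [1.0, 0.0, 0.0]

x1, x2 = [0.2, 0.3], [0.5, 0.4]          # two feasible points
assert feasible(A, b, x1) and feasible(A, b, x2)

for k in range(11):                       # lambda = 0.0, 0.1, ..., 1.0
    lam = k / 10
    xh = [lam * p + (1 - lam) * q for p, q in zip(x1, x2)]
    assert feasible(A, b, xh)             # the combination is again feasible

print("all convex combinations feasible")
```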

# December 10, 2013

## Kees Cook <!-- document.write( "<a href=\"#\" id=\"http://www.outflux.net/blog/archives/2013/12/10/live-patching-the-kernel/_hide\" onClick=\"exclude( 'http://www.outflux.net/blog/archives/2013/12/10/live-patching-the-kernel/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://www.outflux.net/blog/archives/2013/12/10/live-patching-the-kernel/_show\" style=\"display:none;\" onClick=\"show( 'http://www.outflux.net/blog/archives/2013/12/10/live-patching-the-kernel/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### live patching the kernel

A nice set of recent posts has done a great job detailing the remaining ways that a root user can get at kernel memory. Part of this is driven by the ideas behind UEFI Secure Boot, but they come from the same goal: making sure that the root user cannot directly subvert the running kernel. My perspective on this is toward making sure that an attacker who has gained access and then gained root privileges can’t continue to elevate their access and install invisible kernel rootkits.

An outline for possible attack vectors is spelled out by Matthew Garrett’s continuing “useful kernel lockdown” patch series. The set of attacks was examined by Tyler Borland in “Bypassing modules_disabled security”. His post describes each vector in detail, and he ultimately chooses MSR writing as the way to write kernel memory (and shows an example of how to re-enable module loading). One thing not mentioned is that many distros have MSR access as a module, and it’s rarely loaded. If modules_disabled is already set, an attacker won’t be able to load the MSR module to begin with. However, the other general-purpose vector, kexec, is still available. To prove out this method, Matthew wrote a proof-of-concept for changing kernel memory via kexec.

Chrome OS is several steps ahead here, since it has hibernation disabled, MSR writing disabled, kexec disabled, modules verified, root filesystem read-only and verified, kernel verified, and firmware verified. But since not all my machines run Chrome OS, I wanted to look at some additional protections against kexec on general-purpose distro kernels that have CONFIG_KEXEC enabled, especially those without UEFI Secure Boot and Matthew’s lockdown patch series.

My goal was to disable kexec without needing to rebuild my entire kernel. For future kernels, I have proposed adding /proc/sys/kernel/kexec_disabled, a partner to the existing modules_disabled, that will one-way toggle kexec off. For existing kernels, things got more ugly.

What options do I have for patching a running kernel?

First I looked back at what I’d done in the past with fixing vulnerabilities with systemtap. This ends up being a rather heavy-duty way to go about things, since you need all the distro kernel debug symbols, etc. It does work, but has a significant problem: since it uses kprobes, a root user can just turn off the probes, reverting the changes. So that’s not going to work.

Next I looked at ksplice. The original upstream has gone away, but there is still some work being done by Jiri Slaby. However, even with his updates which fixed various build problems, there were still more, even when building a 3.2 kernel (Ubuntu 12.04 LTS). So that’s out too, which is too bad, since ksplice does exactly what I want: modifies the running kernel’s functions via a module.

So, finally, I decided to just do it by hand, and wrote a friendly kernel rootkit. Instead of dealing with flipping page table permissions on the normally-unwritable kernel code memory, I borrowed from PaX’s KERNEXEC feature, and just turn off write protect checking on the CPU briefly to make the changes. The return values for functions on x86_64 are stored in RAX, so I just need to stuff the kexec_load syscall with “mov $-1, %rax; ret” (−1 is −EPERM, which userspace sees as EPERM):

```c
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static unsigned long long_target;
static char *target;
module_param_named(syscall, long_target, ulong, 0644);

/* mov $-1, %rax; ret */
unsigned const char bytes[] = { 0x48, 0xc7, 0xc0, 0xff, 0xff,
				0xff, 0xff, 0xc3 };
unsigned char *orig;

/* Borrowed from PaX KERNEXEC */
static inline void disable_wp(void)
{
	unsigned long cr0;

	preempt_disable();
	barrier();

	cr0 = read_cr0();
	cr0 &= ~X86_CR0_WP;
	write_cr0(cr0);
}

static inline void enable_wp(void)
{
	unsigned long cr0;

	cr0 = read_cr0();
	cr0 |= X86_CR0_WP;
	write_cr0(cr0);

	barrier();
	preempt_enable_no_resched();
}

static int __init syscall_eperm_init(void)
{
	int i;

	target = (char *)long_target;
	if (target == NULL)
		return -EINVAL;

	/* save original */
	orig = kmalloc(sizeof(bytes), GFP_KERNEL);
	if (!orig)
		return -ENOMEM;
	for (i = 0; i < sizeof(bytes); i++)
		orig[i] = target[i];

	pr_info("writing %lu bytes at %p\n", sizeof(bytes), target);

	disable_wp();
	for (i = 0; i < sizeof(bytes); i++)
		target[i] = bytes[i];
	enable_wp();

	return 0;
}
module_init(syscall_eperm_init);

static void __exit syscall_eperm_exit(void)
{
	int i;

	pr_info("restoring %lu bytes at %p\n", sizeof(bytes), target);

	disable_wp();
	for (i = 0; i < sizeof(bytes); i++)
		target[i] = orig[i];
	enable_wp();

	kfree(orig);
}
module_exit(syscall_eperm_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Kees Cook <kees@outflux.net>");
MODULE_DESCRIPTION("makes target syscall always return EPERM");
```

If I didn’t want to leave an obvious indication that the kernel had been manipulated, the module could be changed to:

- not announce what it’s doing
- remove the exit route to not restore the changes on module unload
- error out at the end of the init function instead of staying resident

And with this in place, it’s just a matter of loading it with the address of sys_kexec_load (found via /proc/kallsyms) before I disable module loading via modprobe.
Here’s my upstart script:

```
# modules-disable - disable modules after rc scripts are done
#
description "disable loading modules"

start on stopped module-init-tools and stopped rc
task

script
	cd /root/modules/syscall_eperm
	make clean
	make
	insmod ./syscall_eperm.ko \
		syscall=0x$(egrep ' T sys_kexec_load$' /proc/kallsyms | cut -d" " -f1)
	modprobe disable
end script
```

And now I’m safe from kexec before I have a kernel that contains /proc/sys/kernel/kexec_disabled.

10 December, 2013 11:40PM by kees

## Dimitri John Ledkov

### Hi! My name is... (what?) My name is... (who?) My name is... Slim Shady

On the 21st of November I became a British citizen, and on the 9th of December I changed my name by signing a statutory declaration with the following clauses:
> 1. I absolutely and entirely renounce, relinquish and abandon the use of my former name of Dmitrijs Ļedkovs and assume, adopt and determine to take and use from the date hereof the new name of Dimitri John Ledkov in substitution for my former name of Dmitrijs Ļedkovs.
> 2. I shall at all times hereafter, in all records, deeds, documents and other writings and in all actions and proceedings, as well as in all dealings and transactions and on all occasions whatsoever, use and subscribe my new name of Dimitri John Ledkov in substitution for my former name of Dmitrijs Ļedkovs so relinquished to the intent that I may hereafter be called, known and identified by the new name of Dimitri John Ledkov and not by my former name of Dmitrijs Ļedkovs.
> 3. I authorise and require all persons, at all times, to identify, describe and address me by my new name of Dimitri John Ledkov. I make this solemn declaration conscientiously, believing the same to be true and by virtue of the provisions of the Statutory Declarations Act 1835.
Now I need to get a new passport and IDs, and change my name pretty much everywhere. I have started the ball rolling, and hopefully I'll be known by my new name everywhere soon enough.

Regards,

Dimitri.

10 December, 2013 04:55PM by Dimitri John Ledkov (noreply@blogger.com)

### another two days of paid work on Debian

Last year I told you that I spent two full-time days working on Debian as a part of initiative sponsored by my current employer.

This year I’ve devoted these two days again for Debian.

Quick summary of what I was able to do during these two days.

11 bugs, 4 lintian errors, and 43 warnings were fixed. In addition, 3 packages now use the new source format (which usually means repackaging the software from scratch), 4 use the new copyright format and the newest Standards-Version, and 5 packages were updated to the newest upstream version.

Changelog entries:

potrace (1.11-1) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* New upstream version.
* Completely repackaged from scratch (funny experience as usual):
- uses debhelper compatibility level 9 w/hardening options
- fixes 11 lintian warnings and 2 errors
* Fixes typo in manpage. (Closes: #694492)

-- Bartosz Fenski Mon, 9 Dec 2013 11:23:32 +0100

makeself (2.2.0-1) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* New upstream release. (Closes: #690105)
- handles df output in more portable way (Closes: #641804)
* Repackaged from scratch.
- uses new packaging format 3.0 (Closes: #670738)
- uses debhelper compatibility level 9
- fixes 2 lintian errors and 6 warnings

-- Bartosz Fenski Mon, 09 Dec 2013 17:32:45 +0100

dibbler (1.0.0~rc1-1) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* New upstream release candidate 1 version (Closes: #686539)
- doesn't drop dhcp session during pppd restarts (Closes: #641237)
- doesn't hang indefinitely on 'stop' (Closes: #675272)
* Calls dh --with autotools_dev to prevent build failures (Closes: 727356)
* Includes Japanese debconf translation (Closes: #718921)
* Updated Standards-Version (no changes needed)
* Uses debhelper compatibility level 9 w/hardening options
* init scripts now source init functions
* Fixes 20 lintian warnings.

-- Bartosz Fenski Tue, 10 Dec 2013 10:05:56 +0100

calcurse (3.1.4-1) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* New upstream version.
* ACK previous NMU - thanks!
* Bumped Standards-Version (no changes needed)

-- Bartosz Fenski Tue, 10 Dec 2013 12:22:06 +0100

ipcalc (0.41-4) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* Fixes overzealous input checking (Closes: #332468)
* Martin F Krafft removed as co-maintainer (Closes: #719098)
+ package now uses new copyright format
* Bumped Standards-Version (no changes needed)

-- Bartosz Fenski Tue, 10 Dec 2013 13:05:15 +0100

msort (8.53-1) unstable; urgency=low

* The Akamai Technologies paid volunteer days release.
* New upstream release.
* ACK previous NMUs, thanks!
* Switched to new source format.
* Switched to new copyright format.
* Moved patches to new quilt format and described them.
* Bumped Standards-Version (no changes needed)
* Has correct tcl/tk dependencies (Closes: #545135)
* Doesn't segfault with certain input (Closes: #630485)

-- Bartosz Fenski Tue, 10 Dec 2013 16:39:44 +0100

Thank you Akamai

10 December, 2013 02:48PM by fEnIo

## Russ Allbery

### krb5-sync 3.0

krb5-sync is the software we run at Stanford to synchronize principal information from a central Heimdal realm to Active Directory, allowing users to use either a Linux-based Kerberos environment or Active Directory with the same account and password.

The original intent of this release was to add a new feature to allow a subsidiary instance of an account in the MIT or Heimdal realm to be synchronized with the instance-less account in Active Directory. This allows, for example, an rra/windows instance to be used to set and maintain the password for an rra principal in Active Directory.

In the process of implementing that, though, I ended up doing a significant overhaul of the code, since the plugin architecture was quite awkward and dated. The code now uses the MIT Kerberos data structures in a more natural and native way, since MIT Kerberos has now added direct support for plugins of this sort. Kerberos contexts and Kerberos error codes are used uniformly throughout the plugin, which provides consistent and more robust error handling and reporting. I also significantly enhanced the test suite, although it still needs more work to test the core functionality that has complex external dependencies. This release also drops support for all versions of MIT Kerberos prior to 1.9, which required an external patch; to run krb5-sync 3.0, you should upgrade to a recent version of MIT Kerberos. This allowed me to drop support for the legacy API.

There are a couple of major backward-incompatible changes in this release (and both unfortunately are not handled automatically by the Debian package upgrade, since it's hard to find and safely modify KDC configuration). First, the ad_ldap_base configuration option is now mandatory when synchronizing account status and its meaning has changed. Previously, dc elements for the realm were appended to a provided partial base. Now, the complete DN of the root of the Active Directory tree should be provided. This is more flexible and more useful with a wider variety of Active Directory setups.

Second, I took advantage of the backward-incompatibilities to change the module name to sync.so from krb5_sync.so, since the latter sounded weirdly redundant and verbose when installed in the Kerberos plugin directory. This will require a configuration change to the plugin configuration for the KDC or kadmin server.
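For MIT Kerberos, that rename amounts to a one-line change in the plugin configuration. A hypothetical krb5.conf fragment, assuming MIT's [plugins] syntax and an illustrative install path (check where your distribution actually installs Kerberos plugins):

```
[plugins]
    kadm5_hook = {
        # was: module = krb5_sync:/usr/lib/krb5/plugins/krb5/krb5_sync.so
        module = sync:/usr/lib/krb5/plugins/krb5/sync.so
    }
```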

Also in this release are a couple of new options: ad_queue_only, which forces all changes to be queued for later processing instead of processed in real time, and syslog, which can be used to turn off the internal syslog logging of non-errors from the module. (This is mostly useful for test suites.)

Now, password changes are queued on any Active Directory failure, not just a few oddly-distinguished ones. The previous behavior was rather specific to Stanford's needs, and queuing all password changes shouldn't pose any problems.

Finally, the krb5-sync-backend utility program for manipulating the queued changes has been completely rewritten and is much cleaner. It now uses the Net::Remctl::Backend Perl module for command and option handling, so that module (provided with remctl 3.4 or later) must be installed. It also requires IPC::Run, which is available from CPAN. It uniformly supports a -d option to specify the queue location, and skips event files that no longer exist during processing.

You can get the latest release from the krb5-sync distribution page.

### rra-c-util 4.12

This release of my collection of shared C, Perl, and Autoconf code fixes a bug in all the Autoconf macros that use the lib-helper framework for optional use of libraries. The --with flag without a path would result in yes/include and yes/lib being added to the compiler and linker paths. It also adds Autoconf probes for the Cyrus SASL libraries, contributed by Julien ÉLIE based on the INN macros.

This release also adds support for KADM5_MISSING_KRB5_CONF_PARAMS to portable/kadmin.h and the test_tmpdir function to Test::RRA::Automake. The latter works the same as it does in the C and shell TAP libraries.

Finally, the shared valgrind suppression file adds a suppression for the memory allocated by dlopen to store error messages for dlerror on Linux, which is apparently never freed.
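A suppression of that sort looks roughly like the following. The frame names here are illustrative only — they vary by glibc version, so the real suppression should be generated from valgrind's --gen-suppressions output on the system in question:

```
{
   dlopen-dlerror-message-buffer
   Memcheck:Leak
   fun:calloc
   ...
   obj:*/libdl-*.so*
}
```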

You can get the latest release from the rra-c-util distribution page.