### Archive

Posts Tagged ‘debian’

## SMTP from Exim-equipped roaming notebook (SSH smarthost)

I don’t send email from my notebook often; I deal with my correspondence on my server machine via ssh. When I do send something, it’s usually Git patches or the like. I didn’t run into much trouble sending mail directly, but the SMTP servers of Debian-involved people are some of the pickiest you can meet, so I decided it would be best to switch the exim4 on my notebook to smarthost mode, where all mail is relayed via my main server.

So that should be trivial to do, right? Wrong, apparently. I figured I’d use SMTP auth, but it seems mind-bogglingly complicated to configure unless you want to spend an evening on it. The client part is fairly easy (probably on both exim4 and postfix), but setting up a postfix server to do SMTP auth (for just a single person) is really silly stuff. Maybe not so crazy if you use PAM / shadow for authentication, but that would mean storing, in plaintext on my notebook, a server password anyone could use to log in – no way. It seems I could switch to Dovecot and somehow pass it a simple password to use, but at that point my patience ran out and I backed off a little.

Why not just use ssh as the smarthost SMTP transport? Authentication via ssh is something everyone understands nowadays, it does that job best, no silly passwords are involved, and you can just pipe SMTP through it. You wouldn’t do this in a company setting with Windows notebooks, but for a single geek, it seems ideal.

Someone has already set up ssh as an exim transport, but that was for exim3. So here follows a super-quick HOWTO for doing the same with exim4:

• Set up ssh key on client:
sudo -u Debian-exim /bin/bash
ssh-keygen # accept the defaults and an empty passphrase; the key will be used in an automated way
ssh me@server.example.org # to fill up known_hosts; the login itself will fail for now
cat ~/.ssh/id_rsa.pub # this is my public key
exit # ..the sudo

• Set up the ssh key on the server – paste the public key printed by the cat above into ~me/.ssh/authorized_keys and prepend command="nc -w1 localhost smtp",no-agent-forwarding,no-port-forwarding,no-X11-forwarding to the key line. This key can now be used only for mail relaying.
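For illustration, the resulting line in ~me/.ssh/authorized_keys would look something like this (the key type and the elided key material are placeholders for whatever your ssh-keygen produced):

```
command="nc -w1 localhost smtp",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza… Debian-exim@mynotebook
```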
• Run dpkg-reconfigure exim4-config and configure smarthost mode. Also use it to find out whether you are using the split or big configuration. You will probably also want to enable “mailname hiding”; otherwise your return-path will contain an unroutable address.
• Set up ssh transport in exim4 – add the following to the config file:
ssh_pipe:
debug_print = "T: ssh_pipe for smarthost delivery"
driver = pipe
path = "/bin:/usr/bin:/usr/local/bin"
command = "ssh me@server.example.org nc -w1 localhost smtp"
use_bsmtp
message_prefix = "HELO mynotebook.example.org\r\n"

(It would be nicer to use the actual smarthost configuration option value and our notebook’s hostname instead of hardcoded strings, I guess.)

• In the smarthost: section of the configuration file, replace transport = remote_smtp_smarthost with transport = ssh_pipe.
• Run /etc/init.d/exim4 reload and voilà – sending mail from anywhere should work now!

I *wish* setting up roaming SMTP nodes were way easier nowadays, so I wouldn’t have to spend about 90 minutes on stuff like this…

Categories: linux Tags:

## systemd: journal listing on /dev/tty12

Inspired by the Debian CTTE deliberations on the new default init for Debian, I installed systemd on my notebook after tonight’s forced reboot and played with it a little.

(And I like it! I was very sceptical when I first heard about systemd, but after reading a lot of discussions and trying it myself, I find most of the problematic points either already fixed or a load of FUD. The immediate big selling point for me is actually journald; its integration with systemctl is really awesome. I’ll actually find systemd more useful on servers than desktops, I think.)

While it’s a nice exercise for anyone wanting to get familiar with systemd, I still decided to share a tidbit – a service file that will make journal entries show up on /dev/tty12. Many people run with rsyslogd set up for this; you’ll want to disable that (by default, all journal entries are forwarded to rsyslog). The advantage of showing journal entries instead is mainly the color coding. :)
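If you want to stop journald from forwarding everything to rsyslog without removing rsyslog entirely, the relevant knob lives in /etc/systemd/journald.conf (a sketch, assuming your journald version already supports this option):

```ini
# /etc/systemd/journald.conf (fragment)
[Journal]
# Stop forwarding journal entries to rsyslog; the journalctl instance
# on tty12 then becomes the colorful console log view.
ForwardToSyslog=no
```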

The file listing follows, or get it here.

# Simple systemd service that will show journal contents on /dev/tty12
# by running journalctl -af on it.
# Install by:
#  - Saving this as /etc/systemd/system/journal@tty12.service
#  - Running systemctl enable journal@tty12
#  - Running systemctl start journal@tty12
# journald can also log to the console itself, but the current Debian version
# won't show timestamps and color-coding.
# systemd is under LGPL2.1 etc, this is inspired by getty@.service.

[Unit]
Description=Journal tail on %I
Documentation=man:journalctl(1)
After=systemd-user-sessions.service plymouth-quit-wait.service systemd-journald.service
After=rc-local.service

# On systems without virtual consoles, don't start this unit. (Note
# that serial consoles are covered by serial-getty@.service, not this
# unit.)
ConditionPathExists=/dev/tty0

[Service]
# the VT is cleared by TTYVTDisallocate
ExecStart=/bin/sh -c "exec /bin/journalctl -af > /dev/%I"
Type=idle
Restart=always
RestartSec=1
UtmpIdentifier=%I
TTYPath=/dev/%I
TTYReset=yes
TTYVHangup=yes
#TTYVTDisallocate=yes
TTYVTDisallocate=no
KillMode=process
IgnoreSIGPIPE=no

# Unset locale for the console getty since the console has problems
# displaying some internationalized messages.
Environment=LANG= LANGUAGE= LC_CTYPE= LC_NUMERIC= LC_TIME= LC_COLLATE= LC_MONETARY= LC_MESSAGES= LC_PAPER= LC_NAME= LC_ADDRESS= LC_TELEPHONE= LC_MEASUREMENT= LC_IDENTIFICATION=

[Install]
Alias=getty.target.wants/journal@tty12.service

(P.S.: Creating this service file – my very first one – took me 10 minutes total, including studying documentation and debugging two stupid mistakes I made.)


## memtester and Virtual->Physical Address Translation

The repo.or.cz server started having trouble with randomly corrupted repositories a while ago; short memtest runs were showing nothing and I was reluctant to take the machine offline for many days. So I found the neat memtester tool and fired it up.

Sure enough, in some time, a bitflip error popped up – several times on the same memory offset:

Block Sequential : testing 12FAILURE: 0xc0c0c0c0c0c0c0c != 0xc0c0c0c0c0c0c0e at offset 0x0739610a.

Okay! That might hint at a bad memory cell in a DIMM. But which DIMM? Weelll…

We have to figure out which virtual address the offset corresponds to. Then we have to figure out which physical address that maps to. Finally, we have to guess the appropriate DIMM. We will make a lot of assumptions along the way – in general the mapping can change at any time, pages may be swapped out, etc. – but memtester keeps its single mmap()ed region mlock()ed the whole time, the architecture is a regular i7, and so on. And we don’t have to be 100% sure about the result.

First of all, keep the memtester running – do not restart it! Let’s assume its pid is 25773. We need to look at its memory maps:

# cat /proc/25773/maps
00400000-00403000 r-xp 00000000 09:00 19511804                    /usr/bin/memtester
00602000-00603000 rw-p 00002000 09:00 19511804                   /usr/bin/memtester
7fea279c9000-7fea279ca000 rw-p 7fea279c9000 00:00 0
7fea279ca000-7feb601ca000 rw-p 7fea279ca000 00:00 0
7feb601ca000-7feb60314000 r-xp 00000000 09:00 28713328    /lib/libc-2.7.so
...

We can see pretty much immediately that the 7fea279ca000-7feb601ca000 map is the memory region of the main testing buffer – it’s just huge!
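As a sanity check, the size of that region and the fault-address arithmetic worked through below can be done with plain shell arithmetic. A sketch – note it already applies the multiply-by-8 correction from the EDIT below, so the final number differs from the un-multiplied 0x7feacb16010a used in the original walkthrough:

```shell
# Buffer size and fault-address arithmetic from the map above.
start=0x7fea279ca000
end=0x7feb601ca000
echo $(( end - start ))                 # 5242880000 bytes – the 5G buffer
half=$(( (end - start) / 2 + start ))   # start of the second half
printf '0x%x\n' "$half"                 # 0x7feac3dca000
# memtester offsets count ulongs, so multiply by 8 on a 64-bit box:
printf '0x%x\n' $(( half + 0x0739610a * 8 ))
```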

# echo $((0x7feb601ca000-0x7fea279ca000))
5242880000

Splendid – 5 GiB, just as much as we told memtester to check. Now, what is the virtual memory address of the fault? memtester grabs the buffer, splits it in two halves, fills both halves with identical data and then walks through them, comparing. The second address contained the faulty bit, so the printed offset is within the second half; its start is at (0x7feb601ca000-0x7fea279ca000)/2 + 0x7fea279ca000 = 0x7feac3dca000. Add the offset 0x0739610a and we get 0x7feacb16010a – that should be the faulty address!

*** EDIT *** – this turns out to be wrong! There are two reasons:

1. The offset is actually in multiples of sizeof(unsigned long), which is 8 on 64-bit archs; multiply the number by 8.
2. There is some other problem – some slight shift. In my experiments, 0x7f1bd4f71a40 would be the real address, but the computed one came out as 0x7f1bd4f72238 – not sure what causes that.

Therefore, the best solution is to apt-source memtester and tweak tests.c to also print out the actual pointers of the fault. *** END EDIT *** – the rest should work as described again.

Ok. How to get the physical address? New Linux kernels have a nifty invention – /proc/.../pagemap – that provides access to per-page mapping information for the whole process virtual space; see Documentation/vm/pagemap.txt for details. Unfortunately, accessing it is not so simple, but I hacked together a simple Perl script:

#!/usr/bin/perl
# (c) Petr Baudis 2010 <pasky@suse.cz>
# Public domain.
# This won't work on 32-bit systems, sorry.
use warnings;
use strict;
use POSIX;

our ($pid, $vaddr);
($pid, $vaddr) = @ARGV;

open my $pm, "/proc/$pid/pagemap" or die "pagemap: $!";
binmode $pm;
my $pagesize = POSIX::sysconf(&POSIX::_SC_PAGESIZE);
my $ofs = int((hex $vaddr) / $pagesize) * 8;
seek $pm, $ofs, 0 or die "seek $vaddr ($pagesize * $ofs): $!";
read $pm, my $b, 8 or die "read $vaddr ($pagesize * $ofs): $!";
my $n = unpack "q", $b;

# Bits 0-54  page frame number (PFN) if present
# Bits 0-4   swap type if swapped
# Bits 5-54  swap offset if swapped
# Bits 55-60 page shift (page size = 1<<page shift)
# Bit  61    reserved for future use
# Bit  62    page swapped
# Bit  63    page present
my $page_present = ! ! ($n & (1 << 63));
my $page_swapped = ! ! ($n & (1 << 62));
my $page_size = 1 << (($n & ((1 << 61) - 1)) >> 55);

if (!$page_present and !$page_swapped) {
    printf "[%s: %d * %d] %x: not present\n", $vaddr, $pagesize, $ofs, $n;
    exit;
}
if (!$page_swapped) {
    my $pfn = ($n & ((1 << 55) - 1));
    printf "[%s: %d * %d] %x: present %d, size %d, PFN %x\n",
        $vaddr, $pagesize, $ofs, $n, $page_present, $page_size, $pfn;
} else {
    my $swapofs = (($n & ((1 << 55) - 1)) >> 5);
    my $swaptype = ($n & ((1 << 5) - 1));
    printf "[%s: %d * %d] %x: present %d, size %d, swap type %x, swap offset %x\n",
        $vaddr, $pagesize, $ofs, $n, $page_present, $page_size, $swaptype, $swapofs;
}

Fire this up, and you should see something like:

# perl ~pasky/pagemaplist.pl 25773 0x7feacb16010a
Hexadecimal number > 0xffffffff non-portable at /home/pasky/pagemaplist.pl line 18.
[0x7feacb16010a: 4096 * 274700012288] 860000000002adf7: present 1, size 4096, PFN 2adf7

PFN stands for Page Frame Number. To get the physical address on a PC, just multiply it by the page size: the physical address should be 0x2adf7000.

So, which DIMM do we have to replace? This is the most problematic step; the mapping does not seem to be available anywhere. Let’s look at the physical mappings available in total:

# dmesg | head -n 20
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 2.6.26-2-amd64 (Debian 2.6.26-21lenny4) (dannf@debian.org) (gcc version 4.1.3 20080704 (prerelease) (Debian 4.1.2-25)) #1 SMP Tue Mar 9 22:29:32 UTC 2010
[    0.000000] Command line: root=/dev/md0 ro quiet
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
[    0.000000]  BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
[    0.000000]  BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
[    0.000000]  BIOS-e820: 0000000000100000 - 00000000bf780000 (usable)
[    0.000000]  BIOS-e820: 00000000bf78e000 - 00000000bf790000 type 9
[    0.000000]  BIOS-e820: 00000000bf790000 - 00000000bf79e000 (ACPI data)
[    0.000000]  BIOS-e820: 00000000bf79e000 - 00000000bf7d0000 (ACPI NVS)
[    0.000000]  BIOS-e820: 00000000bf7d0000 - 00000000bf7e0000 (reserved)
[    0.000000]  BIOS-e820: 00000000bf7ec000 - 00000000c0000000 (reserved)
[    0.000000]  BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
[    0.000000]  BIOS-e820: 00000000ffc00000 - 0000000100000000 (reserved)
[    0.000000]  BIOS-e820: 0000000100000000 - 0000000200000000 (usable)
[    0.000000] Entering add_active_range(0, 0, 159) 0 entries of 3200 used
...

There is 7 GB available on the machine, which can easily be seen to correspond to the 3 GiB range 0000000000100000 - 00000000bf780000 and the 4 GiB range 0000000100000000 - 0000000200000000. Our physical address is very low in the first range. I guess the best we can do is assume that the DIMMs provide mappings in the same order as they sit in the slots, and replace the first one… Or you could use dmidecode – but only if your BIOS is not broken like mine and actually reports the start/stop addresses. :(

## Fixing NTP Refusing to Sync

April 26th, 2010

I have just been confronted by NTP absolutely refusing to touch my system’s clock. The trouble with NTP is that it is an absolute PITA to debug, since when it does not get in sync with its peers, it goes to great lengths to make its reasons as incomprehensible as possible.

For some reason, my system had absolutely massive drift – something on the order of half a second per minute, making the clock drift by several tens of minutes per day. So I installed NTP and hoped it would magically fix the issue, but it turns out that NTP by itself is absolutely unhelpful not only in cases of big offset, but also in cases of big drift – it will fix your clock when it is slightly inaccurate, but not when it is inaccurate a lot (…that is, exactly when you would want to use it all the more).

The first thing I did was check the hardware’s opinion. Comparing date and hwclock --show showed that the hardware clock is doing fine; only the kernel’s idea of time is drifting off.
Next, it’s time to see what NTP thinks about its peers:

ntpq> peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 tik.cesnet.cz   .GPS.            1 u   12   64  377    0.641  8494.05 2911.29
 tak.cesnet.cz   .GPS.            1 u    2   64  377    0.636  8594.86 2945.05

NTP polls each peer every “poll” seconds; “when” is the relative time of the last poll; “reach” keeps track of the last successful polls – 377 is best. “Delay” is the network delay; this is fine. “Offset” is the offset between the local and the peer clock – it’s at 8.5 s now, which is not so good, and the trouble is it gets bigger quickly. But the real culprit is “jitter” – it’s huge! This means the variance of the offsets is huge – to put it simply, the offset comes out very different each time it is measured. Since no symbols are printed in the first column of the output, no peer synchronization is going on.

If we know a lot about NTP already, the high jitter should hint that the offset measurements are unreliable. But the network connection of our server is very good, so it would be nice to look at the actual measurements. Instead of the peers, let’s look at their associations:

ntpq> as
ind assID status  conf reach auth condition  last_event cnt
===========================================================
  1 55713  9014   yes   yes  none    reject   reachable  1
  2 55714  9014   yes   yes  none    reject   reachable  1

NTP is not liking our peers. No surprise, with the big jitter.
But what we are after are the assID numbers:

ntpq> rv 55713
assID=55713 status=9014 reach, conf, 1 event, event_reach,
srcadr=tik.cesnet.cz, srcport=123, dstadr=195.113.20.142, dstport=123,
leap=00, stratum=1, precision=-20, rootdelay=0.000, rootdispersion=0.000,
refid=GPS, reach=377, unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6,
flash=400 peer_dist, keyid=0, ttl=0, offset=13041.231, delay=0.602,
dispersion=0.944, jitter=2918.331,
reftime=cf803b51.ddd3e70e  Mon, Apr 26 2010 18:18:25.866,
org=cf803b83.e9b29181  Mon, Apr 26 2010 18:19:15.912,
rec=cf803b76.df382c7c  Mon, Apr 26 2010 18:19:02.871,
xmt=cf803b76.df0d40c7  Mon, Apr 26 2010 18:19:02.871,
filtdelay=     0.60    0.64    0.60    0.51    0.82    0.67    0.69    0.64,
filtoffset= 13041.2 12385.8 11720.4 11075.2 10409.6 9774.54 9129.22 8494.06,
filtdisp=      0.00    0.98    1.97    2.93    3.92    4.86    5.82    6.77

Looking at the last three lines, the reason for the huge jitter finally seems clear! Our clock drifts so fast that the offset goes up by several seconds over our few measurements.

Unfortunately, NTP does not seem to give us the actual estimated drift between the local clock and the peer. This would be very useful, since that is what makes NTP decide whether to go ahead and sync or keep its hands off the clock; 500 ppm is said to be the maximum drift for which synchronization is possible, but I don’t know how to connect that to any of the other numbers I see. When the clock is already in sync, it is probably the ‘frequency’ value in ‘rv’ (and it is stored in the drift file), but this value stays untouched before synchronization. Too bad.

So, now we know the issue is that the kernel clock is going too slow and that NTP is not going to fix it for us.
So, we must resort to manual tinkering using adjtimex:

# adjtimex -p
         mode: 0
       offset: 0
    frequency: 0
     maxerror: 0
     esterror: 0
       status: 64
time_constant: 4
    precision: 1
    tolerance: 32768000
         tick: 9900
     raw time: 1272299204s 17444us = 1272299204.017444
 return value = 5

Wow, a lot of numbers. But the one that tells how fast the clock is going is the ‘tick’ value, and you can adjust it using adjtimex -t 10000 – that will make the clock go a lot faster, and 10000 is also the sort-of canonical value. Let’s do just that and restart ntpd:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 tik.cesnet.cz   .GPS.            1 u    1   64    7    0.659  16852.5   1.840
 tak.cesnet.cz   .GPS.            1 u    2   64    7    0.665  16852.5   1.863

This is MUCH better! In fact, after a few minutes NTP will decide to step the clock to compensate for the offset, and after another while it will finally get in sync with the peers. If the jitter is still too big (but different), keep tweaking the tick value.

EDIT: It seems that, alternatively, you can try changing your clock source – this might help especially in the case of virtualization:

# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
hpet acpi_pm jiffies tsc
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
hpet

Hope this helps if your NTP also refuses to fix your clock. Open questions remain:

• Why was my tick value so off? I guess I will never know. Maybe a reboot would fix it too, but I wasn’t keen to do that.
• How to determine the per-peer drift value, to see how much out of bounds it is?
• How to make NTP automatically fix even huge drifts?
• Why is NTP crafted to be so hard to debug without spending tens of minutes googling, staring at bunches of floats and decoding bitmasks manually?

Thanks to prema and otis for ideas and help.
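A back-of-the-envelope check worth doing early in such a debugging session: the filtoffset row in the rv output grows from 8494.06 ms to 13041.2 ms over eight polls at 64-second intervals, which yields the drift directly (a sketch using those numbers):

```shell
# Estimate drift in ppm from the growth of the filtoffset row.
awk 'BEGIN {
    d = (13041.2 - 8494.06) / 1000;  # offset growth in seconds
    t = 7 * 64;                      # 7 intervals between 8 polls, 64 s each
    printf "drift ~ %.0f ppm\n", d / t * 1e6
}'
```

Roughly 10000 ppm – consistent with a tick of 9900 instead of 10000 (a 1% error), and far beyond the 500 ppm that NTP is said to be able to correct.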
## Putting new X.org version on Debian stable (lenny)

November 26th, 2009

At our university department, we have a reasonably large deployment of Debian installations that are all (almost) the same software-wise, but quite diverse hardware-wise (we buy a few new computers each year and get rid of the oldest ones). Our users are fairly conservative (except wrt. the ‘ida’ package, for some reason) and we have quite a few local tweaks, so even though these are desktop machines, we follow Debian stable and it’s ideal for us – it takes us quite a long time to test, tweak and debug a new release before an upgrade.

Of course, there is a catch – new computers have graphics cards that lenny simply cannot cope with anymore. And if you want new drivers, you need a new xorg version. There are no official backports, so you are faced with installing xorg from testing (squeeze), but this is a fairly large-scale operation: your libc6 package and other base libraries will be upgraded, your keyboard/console configuration will change, etc. The library upgrade is especially troublesome, since in order to stay binary-compatible across the whole department, we would need to install libc6 etc. from squeeze on *all* our machines. It is not very likely that significant breakage of these packages would get through to testing, but there are risks, and overall it adds significant overhead to the task.

Thankfully, there is a neat alternative solution – add Ubuntu to the repository cauldron! Ubuntu Jaunty is very similar to Debian Lenny package-wise, and in fact not even a libc upgrade is necessary. Only a fairly isolated set of xorg-related packages will be upgraded, which seems ideal for the purpose.

First, we need to add extra repositories to our Debian stable system. We will need both jaunty and squeeze – this is a mystery; AFAICT no packages from squeeze are installed during the process, but the squeeze repository is needed for APT to figure out the upgrade path.
Make sure you will stay on stable in general – add this to /etc/apt/apt.conf (create it if necessary):

APT::Default-Release "stable";

Add this to /etc/apt/sources.list:

deb http://ftp.cz.debian.org/debian/ squeeze main non-free contrib
deb-src http://ftp.cz.debian.org/debian/ squeeze main non-free contrib
deb http://archive.ubuntu.com/ubuntu/ jaunty main restricted
deb-src http://archive.ubuntu.com/ubuntu/ jaunty main restricted

Not to worry, apt will keep operating on stable unless you explicitly tell it otherwise. Which we shall do right now:

apt-get install -t jaunty xserver-xorg-video-ati xserver-xorg-video-radeonhd \
    xserver-xorg-video-all xserver-xorg-input-kbd xserver-xorg-input-mouse \
    xserver-xorg

This set of packages is crafted for our installations, so you will perhaps need to tweak it slightly before APT allows the upgrade; you will certainly want to add more input drivers if you are doing this on a notebook. We mess with the input packages because xserver-xorg-input-wacom would pull in a newer libc6 package. Carefully review the installation proposal before agreeing to it, of course.

Voilà – you should now have a new X.org version with current video drivers on your system!

Perhaps if we were starting from scratch, Ubuntu LTS releases would be a good option to consider, since they keep hardware support up to date. However, moving to Ubuntu nowadays would be tedious, and we don’t like the various fancy Ubuntu desktop stuff, being conservative UNIXy persons.

## acroread: ERROR: Cannot find installation directory

August 12th, 2009

After the last apt-get upgrade on squeeze, I started getting this error when starting acroread. In case you get it too, edit your /usr/lib/Adobe/Reader9/bin/acroread-en and sync the $ver assignment at the top to be the same as your /usr/lib/Adobe/Reader9/Reader/AcroVersion.

It seems that the debian-multimedia acroread packaging is rather broken; the acroread-en script is provided by the acroread-debian-files package, which has an unrelated 0.x version and is not bound to a particular acroread version, yet hardcodes this dependency on a particular acroread-data package version.

Christian Marillat explained to me that acroread in testing has been removed due to CVE issues. If only the implicit dependency of acroread-debian-files were expressed in the packaging system, things wouldn’t break when such things happen.


## ucwcs on Debian testing/unstable

A very long time ago, several UCW sages congregated on the ultimate Czech programmer layout – they ended up with the usual US keyboard, except that Caps Lock doesn’t lock case (good in itself!) but acts as a second shift, adding a diacritic mark to the letter. So caps+s produces š, caps+e produces é and caps+w produces ě. I have been using it ever since, but setting it up when moving to another system has always been a bit of a challenge.

But these days, the X11 Czech ucw layout is easy to set up on squeeze/sid (I’m currently using squeeze with xorg from sid – works fine). Xorg keyboard configuration is HAL-driven on sid currently; what you need to do is put this into /etc/default/console-setup:

XKBLAYOUT="us,cz"
XKBVARIANT=",ucw"
XKBOPTIONS="grp:caps_switch"

(Though, the way this works, using two groups, I’m not quite sure how to add another layout with reasonably seamless switching. Not my problem, but some other users might want that.)
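For a quick experiment without touching the config files, the same setup can, I believe, be activated in a running X session with setxkbmap (a sketch, assuming the cz(ucw) variant is shipped with your xkb data files):

```
# Same layout/variant/options as in /etc/default/console-setup above,
# applied to the current X session only:
setxkbmap -layout us,cz -variant ,ucw -option grp:caps_switch
```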


## Fancy OpenVPN auto-setup script for Debian Lenny

I spent last night fighting Debian madness, creating a script that automatically sets up OpenVPN access to our department network on Debian Lenny machines, including NetworkManager integration, gid-based default-route selection (have two Firefoxes running: one for normal browsing, another for VPN browsing) and sending mail from anywhere over the VPN.

Two things took me quite some time to debug. :( I had to divert the nm-openvpn-service-openvpn-helper binary in order to hook in there; apparently, scripts in /etc/NetworkManager/dispatcher.d and /etc/network/if-up.d/ aren’t called during VPN setups (probably a bug fixed in newer NetworkManager versions).

Another insight taught me at least a good lesson about the debconf philosophy – I spent probably *hours* trying to feed debconf my new exim4-config setup (using $DEBCONF_DB_FALLBACK, $DEBCONF_DB_OVERRIDE, debconf-set-selections, …), but dpkg-reconfigure kept ignoring it all and instead rewrote the debconf database according to the old defaults. Only then did it dawn on me that the defaults are actually what it has read back from /etc/exim4, and that it’s supposed to ignore whatever is in the debconf database if corresponding configuration is already available in /etc. So had I not been debconf-considerate and just plainly rewritten the /etc files at the start, everything would have worked magically. Well, at least I know that I can be brutal to my Debian now. ;-)

(On a semi-related note, I’m much happier openSUSE 11.1 notebook user now that I use nm-applet instead of knetworkmanager in my KDE3 environment.)


## OpenFrameworks-induced CCV hell

4am is approaching and I’m very frustrated. I’m building a Microsoft Surface style multitouch screen, and maybe, after all, the software part will be more challenging than the hardware… (more on the hardware some other day)

Quick intro to multitouch: You have a sheet of (plexi)glass on top of a box, and behind it an IR camera and a projector (or just IR camera if you aren’t going to show anything on the surface yet). Usually, you also have a bunch of IR leds shining in certain way (setups differ here), and your fingers touching the surface create light blobs in the IR camera input, and a software processes these blobs and converts them to useful input.

Now, unless you have a fancy hardware setup, this input is very low quality and you want some good software to make sense of the blobs. CCV (Community Core Vision, formerly “tbeta”) is quite a fancy piece of software that makes hardware prototypes easy to set up and experiment with, giving somewhat useful results even for extremely rudimentary setups. The problem is getting it to talk to your IR cam on Linux, of course. Enter: hell.

tbeta-1.1 refuses to select the correct pixel format for my camera, and there is about nothing I can do about it since it is closed-source. Now, the awesome folks of NUIGroup released the source as CCV-1.2, and I really appreciate that, even though I sound frustrated by the build problems. The binary version sees no video devices at all – nada, none. And CCV is built around OpenFrameworks, which uses about 4357 other libraries. Some of them unpackaged, yay. And the only way to build it is using the Code::Blocks IDE (never heard of it before, either).

I will try to sum up the changes required to build everything on Debian on the NUIGroup forums and put a link here. It was a very frustrating journey, also because I had to dust off my anyway-nonexistent Debian packaging skills and met utter user-unfriendliness there – I wanted dh_make and dpatch to do all the hard work, and they appeared to do so, then back-stabbed me with two mysterious weirdnesses (if the tools didn’t look so friendly, I would have read the docs more carefully): dpatch-edit-patch requires the -0 argument to do anything actually useful, and dh_make prepares the install rule with dh_install commented out, making you scratch your head over mysteriously empty generated packages.

So, in the end I managed to so-so package the oscpack library, after heavily patching it to even make it compile(!)… CCV has so many library dependencies that it ships with many of them included in binary form – but only 32-bit, and I’m compiling on 64-bit. Before going to bed, I ended up at:

../../../libs/FreeImage/libfreeimage.a(BitmapAccess.o): In function `.L309':
BitmapAccess.cpp:(.text+0xc10): undefined reference to `operator new(unsigned int)'

I’m ending up just heavily editing the build project and wondering whether it would be best to simply write some Makefiles. I really do wonder whether I’ll manage to build this devious thing eventually, but I’m growing determined. I kind of look forward to contributing patches to clean this stuff up.


## Future of X11 (at least in Debian)

May 10th, 2009

Just a random snippet from #debian-x to cheer everyone up…

11:51 < kampasky> it's nice there is example xorg.conf in /usr/share/doc/xserver-xorg/, but not very useful since users will never know about it until at least that fact is mentioned in /etc/X11/xorg.conf :)
11:54 < jcristau> kampasky: /etc/X11/xorg.conf won't exist.
12:00 < kampasky> Ok, will there be at least /etc/X11/README explaining the situation (and if possible also how to change device configuration using hal fdi snippets etc.)?
12:01 < jcristau> no
12:01 < jcristau> /etc is the place for system configuration, not for docs