speedread – A simple terminal-based open source Spritz-alike.

March 2nd, 2014

A few hours ago, I read about Spritz, a new speed-reading app, and I was quite impressed by the idea. I know the underlying concept is not new and I didn’t even try out Spritz itself (they announce the idea but don’t release the software, huh?), but this was the first time I had heard about it and I really liked it!

So I decided to implement my own terminal version of this idea, acting as a regular command-line filter. Find the new tool speedread at:

https://github.com/pasky/speedread
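
For illustration, it works as a regular Unix filter – pipe text in and the words flash by one at a time. The -w (words per minute) flag below is how I recall the interface; check the README for the exact options:

    # read a text file at 400 words per minute
    speedread -w 400 < article.txt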

(Yes, it may not work well for fiction. Or slides. Yes, it may not work well for emails that you want to just skim for keywords. But then there’s the other 80% of text I need to chew through that falls in neither category. I’ll have to keep trying it out for longer, but it might be really useful.)

Meanwhile, I have also learned about OpenSpritz, a web-based implementation of Spritz. It can be a good match for speedread!

Categories: software

SMTP from Exim-equipped roaming notebook (SSH smarthost)

February 13th, 2014

I don’t send email from my notebook often, dealing with my correspondence on my server machine via ssh. When I do need to, it’s usually to send Git patches or something like that. I haven’t had much trouble sending mail directly, but the SMTP servers of Debian-involved people are some of the pickiest one can meet, and I decided it would be best to switch the exim4 on my notebook to smarthost mode, where all mail is relayed via my main server.

So that should be trivial to do, right? Wrong, apparently. I figured I’d use SMTP auth, but it seems mind-bogglingly complicated to configure if you don’t want to spend an evening on it. The client part is fairly easy (probably on both exim4 and postfix), but setting up a postfix server to do SMTP auth (for just a single person) is really silly stuff. Maybe not so crazy if you use PAM / shadow for authentication, but that would mean storing on my notebook (in plaintext) a server password that anyone could use to log in – no way. It seems I could switch to Dovecot and somehow pass it a simple password to use, but at that point my patience ran out and I just backed off a little.

Why not just use ssh as the smarthost SMTP transport? Authentication via ssh is something everyone understands nowadays, it does the best job there, no silly passwords are involved, and you can just pipe SMTP through it. You wouldn’t do that in a corporate setting with Windows notebooks, but for a single geek, it seems ideal.

Someone has already set up ssh as an exim transport, but that’s for exim3. So here follows a super-quick HOWTO for doing this with exim4:

  • Set up ssh key on client:
    sudo -u Debian-exim /bin/bash
    ssh-keygen # accept the defaults and an empty passphrase; the key will be used unattended
    ssh me@server.example.org # to populate known_hosts; the login itself will fail for now
    cat ~/.ssh/id_rsa.pub # this is the public key to authorize on the server
    exit # ...out of the sudo shell
    
  • Set up the ssh key on the server – paste the public key printed by the cat above into ~me/.ssh/authorized_keys and prepend command="nc -w1 localhost smtp",no-agent-forwarding,no-port-forwarding,no-X11-forwarding to the key line (see the example line after this list). This key can now be used only for mail relaying.
  • Do dpkg-reconfigure exim4-config and configure smarthost mode. Also use it to find out whether you are using split or big configuration. You will also probably want to enable “mailname hiding”, otherwise your return-path will contain an unroutable address.
  • Set up ssh transport in exim4 – add the following to the config file:
    ssh_pipe:
      debug_print = "T: ssh_pipe for smarthost delivery"
      driver = pipe
      path = "/bin:/usr/bin:/usr/local/bin"
      command = "ssh me@server.example.org nc -w1 localhost smtp"
      use_bsmtp
      message_prefix = "HELO mynotebook.example.org\r\n"
      delivery_date_add
      envelope_to_add
    

    (it would be nicer if we used the actual smarthost configuration option value and our notebook’s hostname instead of hardcoded strings, I guess).

  • In the smarthost: section of the configuration file, replace transport = remote_smtp_smarthost with transport = ssh_pipe.
  • /etc/init.d/exim4 reload and voilà, sending mail from anywhere should work now!
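
For clarity, the resulting line in ~me/.ssh/authorized_keys on the server looks roughly like this (key material elided):

    command="nc -w1 localhost smtp",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza... Debian-exim@mynotebook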

I *wish* setting up roaming SMTP nodes were way easier nowadays, so that I wouldn’t have to spend about 90 minutes on this stuff…

Categories: linux

systemd: journal listing on /dev/tty12

February 12th, 2014

Inspired by the Debian CTTE deliberations on the new default init for Debian, I installed systemd on my notebook after tonight’s forced reboot and played with it a little.

(And I like it! I was very sceptical when I first heard about systemd, but after reading a lot of discussions and trying it myself, I find most of the problematic points either fixed already or a load of FUD. The immediate big selling point for me is actually journald; it and its integration with systemctl are really awesome. I’ll actually find systemd more useful on servers than desktops, I think.)

While it’s a nice exercise for anyone wanting to get familiar with systemd, I still decided to share a tidbit – a service file that will make log entries show up on /dev/tty12. Many people run with rsyslogd set up for this; you’ll want to disable that (by default, all journal entries are forwarded to rsyslog). The advantage of showing journal entries instead is mainly the color coding. :)
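
(To disable just the forwarding without removing rsyslog, something like the following in /etc/systemd/journald.conf should do it – at least in the systemd version I played with – followed by a restart of systemd-journald:)

    [Journal]
    ForwardToSyslog=no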

The file listing follows, or get it here.

# Simple systemd service that will show journal contents on /dev/tty12
# by running journalctl -af on it.
# Install by:
#  - Saving this as /etc/systemd/system/journal@tty12.service
#  - Running systemctl enable journal@tty12
#  - Running systemctl start journal@tty12
# journald can also log on console itself, but current Debian version won't
# show timestamps and color-coding.
# systemd is under LGPL2.1 etc, this is inspired by getty@.service.

[Unit]
Description=Journal tail on %I
Documentation=man:journalctl(1)
After=systemd-user-sessions.service plymouth-quit-wait.service systemd-journald.service
After=rc-local.service

# On systems without virtual consoles, don't start this unit. (Note
# that serial gettys are covered by serial-getty@.service, not this
# unit.)
ConditionPathExists=/dev/tty0

[Service]
# the VT is cleared by TTYVTDisallocate
ExecStart=/bin/sh -c "exec /bin/journalctl -af > /dev/%I"
Type=idle
Restart=always
RestartSec=1
UtmpIdentifier=%I
TTYPath=/dev/%I
TTYReset=yes
TTYVHangup=yes
#TTYVTDisallocate=yes
TTYVTDisallocate=no
KillMode=process
IgnoreSIGPIPE=no

# Unset locale for the console getty since the console has problems
# displaying some internationalized messages.
Environment=LANG= LANGUAGE= LC_CTYPE= LC_NUMERIC= LC_TIME= LC_COLLATE= LC_MONETARY= LC_MESSAGES= LC_PAPER= LC_NAME= LC_ADDRESS= LC_TELEPHONE= LC_MEASUREMENT= LC_IDENTIFICATION=

[Install]
Alias=getty.target.wants/journal@tty12.service

(P.S.: Creating this service file – my very first one – took me 10 minutes total, including studying documentation and debugging two stupid mistakes I made.)

Categories: linux

GPS coordinates of Czech towns and municipalities

February 1st, 2014

To display weather sonde landing positions on IRC, I needed a list of coordinates of Czech towns in a simple CSV format, but it turned out to be surprisingly hard to obtain. There is a table on one astronomy website, but the selection of municipalities included there is rather odd – in some places only a part of the municipality is listed instead of the municipality itself, and so on.

In the end I went the do-it-yourself route, combining a list from Wikipedia, the Google Geocoding API and a bit of XPath.

A list of a reasonable subset of towns can be obtained e.g. with:

curl 'http://cs.wikipedia.org/w/index.php?title=Seznam_obc%C3%AD_s_roz%C5%A1%C3%AD%C5%99enou_p%C5%AFsobnost%C3%AD&action=edit' |
  sed -ne 's/^# \[\[\([^]|]*|\)*\([^]]*\)\]\].*/\2/p' | sort
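
(The sed expression assumes the wiki source lists each municipality as a numbered wiki link – e.g. a line like # [[Aš]] comes out as just Aš, and for piped links of the form # [[Target|Label]] the part after the pipe is kept.)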

Given the name of a municipality, its coordinates can be obtained with this incantation:

m=Aš; curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?address='"${m// /+},+CZ"'&sensor=false' |
  xmllint --xpath '//location[lat or lng]//text()' -

(The important trick is the ,CZ part – otherwise Google will know plenty of Kolíns, and Aš will mean American Samoa. Alternatively, you can filter the Czech results using the XPath //result[address_component/short_name/text()="CZ"]/geometry/location[lat or lng]//text().)

Now, to generate a simple CSV, all that remains is to put the pieces together:

curl 'http://cs.wikipedia.org/w/index.php?title=Seznam_obc%C3%AD_s_roz%C5%A1%C3%AD%C5%99enou_p%C5%AFsobnost%C3%AD&action=edit' |
  sed -ne 's/^# \[\[\([^]|]*|\)*\([^]]*\)\]\].*/\2/p' | sort |
  while read m; do
    echo -n $m
    curl -s 'http://maps.googleapis.com/maps/api/geocode/xml?address='"${m// /+},+CZ"'&sensor=false' |
      xmllint --xpath '//location[lat or lng]//text()' - |
      tr -s '\n' ' ' | tr ' ' ','
    echo
    sleep 0.1
  done | sed 's/,$//'

Would you like the ready-made CSV?

Bonus: A similarly generated CSV with Prague quarters (cadastral areas).

Categories: linux

Brmson / BlanQA

January 27th, 2014

I have recently been dabbling in Natural Language Processing, in particular Question Answering. I have been fascinated by the success of IBM Watson and have gradually come to believe that this technology can serve as a great basis for autonomous agents operating in the complex world of human knowledge. (I later came across Project Aristo – I’m not alone.) This approach, compared to projects like OpenCog that aim to create autonomous agents understanding and operating in the physical world, seems to offer many advantages – but let’s talk about that some other time.

Let’s say we wanted to take a stab at approximating IBM Watson with easily available technology, in “at home” conditions (or rather, “at hackerspace” – I gave this aim the temporary callsign “Project Brmson”). What’s the best we can do?

So I took a look at the current open source question-answering technologies and found – well, just one, and none that would be immediately usable by anyone. I have put together a short survey of the current landscape.

The only OSS framework I found that (i) could be used with not-so-many modifications to produce something functional, and (ii) would be a good base to build a truly good system on, is OAQA / OpenQA. It seems appealing from multiple viewpoints – it builds on the UIMA unstructured data processing platform which is also at the basis of IBM Watson, and it originates at CMU, which collaborated with IBM in this area; and, well, it’s the only platform that already exists anyway, so it’s a good starting point for someone who has no prior clue about the field. An honorable mention goes to OpenEphyra, basically a non-UIMA OAQA predecessor by the same institution; it’s not a good base for new systems, but it can be mined for a lot of NLP functionality.

In my first stab, I looked at whether there is actually a working QA system built on top of OAQA, and the answer was non-obvious. There is a helloqa project, but its master branch can currently do nothing useful. However, there is also a prototype branch that can actually answer some terrorism-related questions! It doesn’t work out of the box, but our fork does if you follow the instructions. But overall, the project seems to be a bit of a hack and not a good base for a universal system usable by anyone but the original author.


So I set out to rewrite the helloqa-prototype from scratch on top of OAQA and build a different, clean and extensible QA pipeline (one that shares bits of the original code but is much simpler). Thus, behold the project BlanQA! :-)

BlanQA is focused on universality, practicality and user-friendliness. That means there is relatively detailed documentation and easy-to-follow installation instructions (try BlanQA out yourself!). By default, BlanQA offers an interactive mode and answers on top of the Project Gutenberg corpus; but you can also connect it to IRC (#brmson @ freenode) or run it on top of Wikipedia.

BlanQA is still a very stupid program at this point. It gets the answer right about 10–30% of the time, depending on how nicely you ask. But it’s more important as a base on top of which you can add clever algorithms (the smartest parts of BlanQA are currently outsourced from the OpenEphyra project, mainly guessing the type of the answer – is it a person? a location? an amount of something?). And if you want an OSS question-answering engine now, BlanQA is where to turn!


I want to develop this further, but the way ahead remains a little unclear. The thing is, OAQA appears to have significant architectural problems, which I realized as I continued hacking on BlanQA and learning more about both OAQA and the UIMA framework it builds on. The rest of this section is a bit technical; cf. also a quick intro to the BlanQA architecture.

The basic UIMA principle is that each artifact (in this case: question, document/passage, answer) should have its own CAS (“piece of data” with a set of annotations and other featuresets derived from it) with a dedicated type system and appropriate Sofa (view of this piece of data). This would enable easy creation of stand-off annotations of e.g. fetched documents.

However, the OAQA model works with just a single CAS that has only the question text set as a Sofa, and then a variety of types mashed together, partitioned only into phase-based views. This seems to me a substantially less appealing option – it doesn’t allow using third-party UIMA annotators that expect their subject to be the Sofa, it might be harmful for scale-out, and it seems generally awkward to use; I actually have a hard time seeing what advantages using UIMA brings to the table in this model.

So it seems the way forward for BlanQA (or, likely, a differently-named successor) is to break away from OAQA and build directly on top of UIMA (possibly with a hacked version of uima-ecd that supports multiple CASes, but that seems a bit of an intimidating proposition).


Tue Jan 28 2014 update: Note that we have started work on a new Question Answering engine YodaQA built on UIMA from scratch.

Categories: software

Weathersonde – Nearby Landing Notification

January 26th, 2014

At our hackerspace brmlab, one of the things we do is pick up landed weather sondes. In short: fun hardware literally falling out of the sky, several times a day, every day. These are stratospheric balloons used for gathering weather-prediction data, launched from various sites; they reach an altitude of about 35 km, then the balloon bursts and the payload lands back on the ground at a random location. The whole time, it transmits its current GPS coordinates via radio, making this a rather exciting sub-class of geocaching.

As a simple hack today (idea by chido), I created a simple script sonde.sh that is designed to be run three times a day (e.g. from cron, as sketched below); it runs a sonde trajectory prediction (using the predict.habhub.org service – example) and, if the sonde is predicted to land within a certain radius, reports that with a link to the prediction. By default, it is connected to jendabot, one of our brmlab IRC robotic minions, written in an appealingly crazy way as a collection of bash scripts.
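
A hypothetical crontab entry for that schedule – the times and the path are made up and would be tuned to the local launch times:

    # check for nearby sonde landings three times a day
    30 2,8,14 * * * /home/brmlab/bin/sonde.sh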

Categories: life, software

Mice TV!

January 20th, 2014

Chido has two pet mice (Acomys cahirinus, actually) and we finally did what we had been planning to do for a long time:

vlc http://pasky.or.cz:8090/mouses.flv

A live video stream of our mouse palace!


Some technical trivia – the IP cam used is an Edimax IC-3110 (it’s pretty crappy, not recommended) and we are restreaming using this vlc invocation:

cvlc -L --sout "#transcode{vcodec=mp4v,vb=1024,scale=1}:duplicate{dst=http{mux=ts,dst=:8090/mouses.flv},select=noaudio}" --no-sout-rtp-sap --no-sout-standard-sap --sout-ts-shaping=1000 --sout-ts-use-key-frames --ttl=40 rtsp://admin:PASSWORD@192.168.6.X:554/ipcam.sdp

(I did not get an h264 FLV stream working reliably, unfortunately. I tried #transcode{vcodec=h264,venc=x264{keyint=20},vb=4096,scale=1} and duplicate{dst=http{mux=ffmpeg{mux=flv},dst=...,select=noaudio}.)

Categories: linux

Playing MP3 on Raspberry Pi with low latency

May 11th, 2013

One commercial project I was working on for the Raspberry Pi involved playing various MP3 samples when a button is pushed. The original implementation used mplayer to play back the samples; however, there was up to 1500 ms of latency between executing mplayer and the start of playback.

I didn’t do detailed profiling, but I think the two factors causing mplayer’s high latency were that (i) just loading all the .so libraries mplayer depends on can take many hundreds of milliseconds, and (ii) the file is scanned for all kinds of stuff, streams are detected, etc., which can also take some extra time; perhaps I could force mplayer to realize this is a simple MP3 file, but (i) is still the much bigger factor.

I wanted to avoid recoding all the samples to wav. That would allow me to use aplay directly and playback would start immediately, but it would also feel really silly; MP3 decoding is not the bottleneck, the latency of mammoth software loading and initializing itself is. I also didn’t try mpd, as that might have been a bit painful to set up.

Another point worth noting is that I didn’t use the crappy on-board PWM audio but a $3 Chinese USB soundcard (which is still much better than the PWM audio), on a reasonably up-to-date Raspbian Wheezy. So I tried…

  • mplayer -slave -idle, started in parallel with my program and receiving commands via a FIFO. It hangs after the first file (even though it works fine when run without -slave).
  • cmus running in parallel with my program, controlled by cmus-remote. Convincing it to use the ALSA device of my choice was really hard, but eventually I managed, only to hear my files sped up about 20x.
  • madplay I couldn’t convince to use a non-default ALSA device at all.
  • mpg123 started immediately and could play back the MP3 files on a non-default ALSA device. Somehow, the quality was very low though (telephone grade) and there was an intense high-pitched click at the end of the playback.
  • mpg321 I couldn’t convince to produce any sound, and anyway it had about 800 ms of latency before playback started, probably due to its libao dependency.
  • sox, or rather AUDIODEV=hw:1 play, worked! (After installing a package with MP3 support for sox.) No latency, normal quality, no clicks, no hangs. Whew.

Verdict: there still is software on Linux that can properly and quickly play MP3 files on the Raspberry Pi, though it was a challenge to find it. I didn’t think of sox at first and was almost giving up hope. BTW, normally you would use sox and play for applying a variety of audio transformations and effects in a batch/pipeline fashion, and it can do a lot of awesome magic.
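
The winning combination, summarized (libsox-fmt-mp3 is the Debian package carrying sox’s MP3 support; hw:1 is the USB soundcard’s ALSA index on my system):

    sudo apt-get install sox libsox-fmt-mp3
    # plays with no noticeable startup latency
    AUDIODEV=hw:1 play sample.mp3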

Categories: linux, software

Short minutes from “Text Mail Clients” BOF @ LinuxDays Prague 2012

October 21st, 2012

I promised to post some minutes from the BOF in $SUBJ here for people who don’t remember all the tool names:

  • [l]imit in mutt is a very powerful functionality; my other blogpost describes notmuch integration for it
  • the new mutt-kz has good virtual folder support via its notmuch integration; perhaps the future of state-of-the-art text mail clients
  • sup is an interesting gmail-like text client, but way too slow!
  • alot is worth a look as a nice notmuch frontend; no screenshots on the net though
  • notmuch can filter by folder label, so a single db is fine for all your folders
  • dovecot sync (dsync)
  • looking at images when reading mail remotely – screenenv (sets $DISPLAY based on the last active screen client); a new tool is needed for seamless transfer of files back to the local machine!
  • maildir sync using a VCS (bazaar) instead of imap (read mails using a “thick client”) (ccxcz)
  • prioritization of downloaded mails by using uucp for transfer (lmw)
  • sending mails by feeding them to procmail, which decides how to send them (ccxcz)
  • automatic addressbook building: lbdb (little brother db)
  • Trojita is a Qt MUA with very fast IMAP.
  • another mutt tip: set edit_headers will make mutt not ask about recipient, subject etc. before starting the editor, but let you put in the headers yourself
Categories: linux

Conversion from mixed UTF8 / legacy encoding data to UTF8

September 23rd, 2012

For about 13 years now, I have been running the Muaddib IRC bot that serves a range of Czech channels. Its features have varied historically, but the main one is providing conversational AI services (it learns from people talking to it and replies based on what it has learnt). It runs the Megahal Markov chain algorithm, currently using the Hailo implementation.

Sometimes, I need to reset its brain. Most commonly this is when the server happens to hit a disk-full situation, something no Megahal implementation seems to be able to deal with gracefully. :-) (Hailo is SQLite-based.) Thankfully, with all the IRC logs archived, retraining is a simple sed job. However, Muaddib has always had trouble with non-ASCII data, mixing a variety of encodings and liking to produce gibberish results.

So, historically, people used to talk to Muaddib using the ISO-8859-2 and UTF8 encodings, and now I had mixed ISO-8859-2/UTF8 lines that I wanted to convert all to UTF8. Curiously, I was not able to quickly Google up a solution and had to hack together my own (and, well, dealing with Unicode in Perl is never something that goes quickly). For the benefit of fellow Google wanderers, here is my take:

perl -MEncode -ple 'BEGIN { binmode STDOUT, ":utf8"; }
  $_ = decode("UTF-8", $_, sub { decode("iso-8859-2", chr(shift)) });'

It relies on the ability of Encode::decode() to take a custom conversion-failure handler (and on the fact that Latin2 character sequences that are also valid UTF-8 sequences are fairly rare). Note that Encode 2.35 (found in Debian squeeze) is broken: it documents this feature, but it doesn’t work. Encode 2.42_01 in Debian wheezy or the latest CPAN version (use perl -MCPAN -e 'install Encode' to upgrade) works fine.
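
For completeness, it runs as a standard filter over the archived logs, e.g. (file names here are hypothetical):

    perl -MEncode -ple 'BEGIN { binmode STDOUT, ":utf8"; }
      $_ = decode("UTF-8", $_, sub { decode("iso-8859-2", chr(shift)) });' \
      < '#channel.log' > '#channel.log.utf8'
    mv '#channel.log.utf8' '#channel.log'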