Archive for the ‘software’ Category

Keras for Binary Classification

January 13th, 2016 1 comment

So I didn’t get around to seriously playing with Keras (a powerful library for building fully-differentiable machine learning models, aka neural networks) – beyond running a few examples – until now. And I was a bit surprised by how tricky it actually was to get a simple task running, despite (or maybe because of) all the docs already available.

The thing is, many of the “basic examples” gloss over exactly what the inputs and especially the outputs look like, and that’s important. Especially since for me the archetypal, simplest machine learning problem is binary classification, whereas in Keras the canonical task is categorical classification. Only after fumbling around for a few hours did I realize this fundamental rift.

The examples (besides LSTM sequence classification) silently assume that you want to classify into categories (e.g. to predict words etc.), not do a binary 1/0 classification. The consequence is that if you naively copy the example MLP at first, before learning to think about it, your model will never learn anything and, to add insult to injury, will always report its accuracy as 1.0.

So, there are a few important things you need to do to perform binary classification (a minimal sketch putting them together follows this list):

  • Pass output_dim=1 to your final Dense layer (this is the obvious one).
  • Use sigmoid activation instead of softmax – obviously, softmax on a single output will always normalize whatever comes in to 1.0.
  • Pass class_mode='binary' to model.compile() (this fixes the accuracy display, possibly more; you want to pass show_accuracy=True to model.fit()).
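
Putting those three points together, a minimal binary-classification MLP might look roughly like this. The layer sizes, optimizer and toy data are made up for illustration, and the argument names (output_dim, class_mode, nb_epoch, show_accuracy) follow my reading of the Keras API of that era, so treat it as a sketch rather than copy-paste material:

  import numpy as np
  from keras.models import Sequential
  from keras.layers.core import Dense

  # Toy data: 1000 samples with 20 features; labels are plain 0/1 values,
  # no one-hot encoding needed in the binary case.
  X = np.random.random((1000, 20))
  y = np.random.randint(2, size=(1000, 1))

  model = Sequential()
  model.add(Dense(64, input_dim=20, activation='relu'))
  model.add(Dense(output_dim=1, activation='sigmoid'))  # single sigmoid output

  # class_mode='binary' makes the reported accuracy meaningful for 1/0 labels
  model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                class_mode='binary')
  model.fit(X, y, nb_epoch=10, batch_size=32, show_accuracy=True)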

Other lessons learned:

  • For some projects, my approach of first cobbling up an example from existing code and then thinking harder about it works great; for others, not so much…
  • In IPython, do not forget to reinitialize model = Sequential() when re-running the cells that build the model – a lot of confusion ensues otherwise.
  • Keras is pretty awesome and powerful. Conceptually, I think I like NNBlocks’ usage philosophy more (regarding how you build the model), but sadly that library is still at a very early stage (I have filed a bunch of GitHub issues).

(Edit: After a few hours, I toned down this post a bit. It wasn’t meant at all to be an attack on Keras, though someone might perceive it as such – just a word of caution to fellow Keras newbies. And it shouldn’t take much to improve the Keras docs.)

Categories: ailao, software

Linked Data Mashups

November 13th, 2015 No comments

I’m still working on YodaQA, and there is quite some interest in it in my mailbox. One thing leads to another, and our startup Ailao already has its first few customers; we work together on various related semantic NLP / search projects.

YodaQA now has a much neater web interface as well as a mobile app, since the natural way to interact with a QA system is by voice. Plus, on a limited domain (movies), we are getting pretty close to crossing the 80% accuracy mark on simpler questions, entering the “magic zone” where people might start really trusting the system. A few essential building blocks for that are still in the pipeline, though.

I’ll try to post a bit more about YodaQA and other work we are doing in the coming weeks / months (as well as some of my hobby projects, of course).

For Jan Šedivý’s course, I prepared a presentation on building apps around the semantic web and linked data. See it here for an intro to the tech; it also includes two silly web mashups that might be inspiring.

Categories: ailao, software

YodaQA Question Answering

April 27th, 2015 1 comment

I was working on Question Answering last year. Guess what, I’m still on it!

I threw away my first prototype, BlanQA, and started building a second system, YodaQA. It currently performs reasonably, answering about a third of trivia questions correctly and listing the correct answer among the top five candidates for half of the questions – without doing any googling or binging.

A few weeks ago, I published the first paper on YodaQA. With a few fellow scientists, we also re-started the qa-oss Google Group on open source question answering systems.

Today, I finally made a proper homepage for YodaQA and launched a live demo of the system. It’s pretty primitive, but hopefully it will serve as a proof of concept.

Categories: ailao, software

Michi – 15×15 ~6k KGS in 540 lines of Python code

March 25th, 2015 No comments

So what’s the strongest program you can make with minimum effort and code size while keeping maximum clarity? Chess programmers have been exploring this for a long time, e.g. with Sunfish, and that inspired me to try something similar for Go over a few evenings recently:

https://github.com/pasky/michi

Unfortunately, while Chess rules are perhaps more complicated for humans, the game is much easier for computers to play! So the code is longer and more complicated than Sunfish, but hopefully a Computer Go newbie can still understand it within a few hours. I will welcome any feedback and/or pull requests.

Contrary to other minimalistic UCT Go players, I wanted to create a program that actually plays reasonably. It can beat many beginners, and on 15×15 it fares about even with GNUGo; even on 19×19, it can win about 20% of its games against GNUGo on a beefier machine. Based on my observations, the limiting factor is time – Python is sloooow, and a faster language with the exact same algorithm should be able to speed this up at least 5x, which should mean a level-up of at least two ranks. I also intend the code as my legacy – not sure if I’ll ever get back to Pachi – since it covers the parts of a Computer Go program I consider most essential. The biggest code omission with respect to strength is probably the lack of 2-liberty semeai reading and more sophisticated self-atari detection.

P.S.: The 6k KGS estimate is based on playtesting against GNUGo over 40-60 games – the winrate is about 50% with 4000 playouts/move. Best I can do… But you can connect the program itself to KGS too:

http://www.gokgs.com/gameArchives.jsp?user=michibot

Categories: software

BIPOP-CMA-ES Patch

October 24th, 2014 No comments

In part of my research, I have been heavily involved with building portfolios of optimization algorithms. Optimization algorithms lie at the root of many computational tasks, from designing laser mirror systems to training neural networks. We want to find a minimum (or maximum) of some mathematical function, and for some functions that is easier than for others.

For very many fairly hairy functions, the best state-of-the-art optimizer is an evolutionary algorithm called CMA-ES. It also has a very nice Python implementation by its original author, Nikolaus Hansen.

CMA-ES is still not as good as it could be on some functions with many local optima, but its performance can be much improved by a restart strategy that repeatedly restarts it with varying population sizes and parameters. The best-performing restart strategy is BIPOP-CMA-ES, and unfortunately it has had no Python implementation so far. I took care of that more than a month ago, but since it’s taking some time to get my modifications upstreamed, in case anyone would find it useful:

here is a patch for CMA-1.1.02 adding the BIPOP restart strategy
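
To give an idea of what BIPOP buys you, here is roughly how a restarted run looks with the Python cma module; the restarts/bipop keywords match what later upstream releases ended up exposing, so with the patched 1.1.02 the exact spelling may differ – treat this as a sketch of the usage, not the patch’s literal interface:

  import math
  import cma

  # A multimodal test function (Rastrigin) where plain CMA-ES tends to get
  # stuck in local optima, so restarts with varying population size pay off.
  def rastrigin(x):
      return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

  # restarts bounds the number of restarts; bipop=True alternates between
  # large (doubled) and small populations across those restarts.
  res = cma.fmin(rastrigin, 10 * [0.5], 0.3, restarts=8, bipop=True)
  print(res[0], res[1])  # best solution found and its objective value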

Categories: software

speedread – A simple terminal-based open source Spritz-alike.

March 2nd, 2014 No comments

A few hours ago, I read about Spritz, a new speed-reading app, and I was quite impressed by the idea. I know the underlying concept is not new and I didn’t even try out Spritz itself (they announce their idea but don’t release the software, huh?), but this was the first time I had heard about it and I really liked it!

So I decided to implement my own terminal version of this idea, acting as a regular command-line filter. Find the new tool speedread at:

https://github.com/pasky/speedread

(Yes, it may not work well for fiction. Or slides. Yes, it may not work well for emails that you just want to skim for keywords. But then there’s the other 80% of text I need to chew through that does not fall into either category. I’ll have to keep trying it out for longer, but it might be really useful.)

Meanwhile, I have also learned about OpenSpritz, a web-based implementation of Spritz. It can be a good match for speedread!

Categories: software

Brmson / BlanQA

January 27th, 2014 No comments

I have recently been dabbling in Natural Language Processing, in particular Question Answering. I have been fascinated by the success of IBM Watson and have gradually come to believe that this technology can serve as a great basis for autonomous agents operating in the complex world of human knowledge. (I later came across Project Aristo – I’m not alone.) This approach, compared to projects like OpenCog that aim to create autonomous agents understanding and operating in the physical world, seems to offer many advantages – but let’s talk about that some other time.

Let’s say we wanted to take a stab at approximating IBM Watson with easily available technology, in “at home” conditions (or rather, “at hackerspace” – I gave this aim the temporary callsign “Project Brmson”). What’s the best we can do?

So I took a look at the current open source question-answering technologies and found – well, just one, and none that would be immediately usable by anyone. I have put together a short survey of the current landscape.

The only OSS framework I found that (i) could be used with not-so-many modifications to produce something functional, and (ii) would be a good base to build a truly good system on, is OAQA / OpenQA. It seems appealing from multiple viewpoints – it builds on the UIMA unstructured data processing platform which also underlies IBM Watson, and it originates at CMU, which collaborated with IBM in this area; and, well, it’s the only platform that already exists anyway, so it’s a good starting point for someone who has no prior clue about the field. An honorable mention goes to OpenEphyra, basically a non-UIMA OAQA predecessor by the same institution; it’s not a good base to build new systems on, but it can be sourced for a lot of NLP functionality.

In my first stab, I looked at whether there is actually a working QA system built on top of OAQA, and the answer was non-obvious. There is a helloqa project, but its master branch can currently do nothing useful. However, there is also a prototype branch that can actually answer some terrorism-related questions! It doesn’t work out of the box, but our fork does if you follow the instructions. Overall, though, the project seems to be a bit of a hack and not a good base for a universal system usable by anyone but the original author.


So I set out to rewrite the helloqa-prototype from scratch on top of OAQA and build a different, clean and extensible QA pipeline (which shares bits of the original code and is much simpler). Thus, behold the project BlanQA! :-)

BlanQA is focused on universality, practicality and user-friendliness. That means there is relatively detailed documentation and easy-to-follow installation instructions (try BlanQA out yourself!). By default, BlanQA offers an interactive mode and answers on top of a Project Gutenberg corpus; but you can also connect it to IRC (#brmson @ freenode) or run it on top of Wikipedia.

BlanQA is still a very stupid program at this point. It gets the answer right about 10-30% of the time, depending on how nicely you ask. But it’s more important as a base on top of which you can add clever algorithms (the smartest parts of BlanQA are currently outsourced from the OpenEphyra project, mainly guessing the type of the answer – is it a person? location? amount of something?). And if you want an OSS question-answering engine now, BlanQA is where to turn!


I want to develop this further, but the way ahead remains a little unclear. The thing is, OAQA appears to have significant architectural problems, as I realized while I continued hacking on BlanQA and learning more about both OAQA and the UIMA framework it builds on top of. The rest of this section is a bit technical; cf. also a quick intro to the BlanQA architecture.

The basic UIMA principle is that each artifact (in this case: question, document/passage, answer) should have its own CAS (“piece of data” with a set of annotations and other featuresets derived from it) with a dedicated type system and appropriate Sofa (view of this piece of data). This would enable easy creation of stand-off annotations of e.g. fetched documents.

However, the OAQA model works with just a single CAS that has only the question text set as a Sofa and then a variety of types mashed together, partitioned only into phase-based views. This seems to me a substantially less appealing option – it doesn’t allow using third-party UIMA annotators that expect their subject to be the Sofa, it might be harmful for scale-out, and it seems generally awkward to use; I actually have a hard time seeing what advantages using UIMA brings to the table in this model.

So it seems the way forward for BlanQA (or, likely, a differently-named successor) is to break away from OAQA and build directly on top of UIMA (possibly with a hacked version of uima-ecd that supports multiple CAS, but that seems a bit of an intimidating proposition).


Tue Jan 28 2014 update: Note that we have started work on a new Question Answering engine YodaQA built on UIMA from scratch.

Categories: software

Weathersonde – Nearby Landing Notification

January 26th, 2014 No comments

At our hackerspace brmlab, one of the things we do is pick up landed weather sondes. In short, fun hardware literally falling out of the sky, several times a day, every day. These are stratospheric balloons used for weather prediction, launched from various sites; they reach an altitude of about 35 km, then the balloon bursts and the payload lands back on the ground at a random location. The whole time, it transmits its current GPS coordinates via radio, making this a rather exciting sub-class of geocaching.

As a simple hack today (idea by chido), I created a simple script, sonde.sh, designed to be run three times a day: it runs a sonde trajectory prediction (using the predict.habhub.org service – example) and, if the sonde is predicted to land within a certain radius, reports that with a link to the prediction. By default, it is connected to jendabot, one of our brmlab IRC robotic minions, written in an appealingly crazy way as a collection of bash scripts.
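
The “lands within a certain radius” test itself is just a great-circle distance check against the predicted landing point. A minimal sketch of that piece in Python (sonde.sh does it in shell; the home and landing coordinates below are made up for illustration):

  import math

  def haversine_km(lat1, lon1, lat2, lon2):
      # Great-circle distance in km between two (lat, lon) points in degrees.
      r = 6371.0  # mean Earth radius
      p1, p2 = math.radians(lat1), math.radians(lat2)
      dphi = math.radians(lat2 - lat1)
      dlam = math.radians(lon2 - lon1)
      a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
      return 2 * r * math.asin(math.sqrt(a))

  HOME = (50.08, 14.42)       # hackerspace location (illustrative values)
  RADIUS_KM = 50.0
  predicted = (49.9, 14.7)    # would come from the trajectory prediction output

  if haversine_km(HOME[0], HOME[1], predicted[0], predicted[1]) <= RADIUS_KM:
      print("sonde predicted to land within %g km - worth reporting" % RADIUS_KM)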

Categories: life, software

Playing MP3 on Raspberry Pi with low latency

May 11th, 2013 1 comment

One commercial project I was working on for the Raspberry Pi involved playing various MP3 samples when a button is pushed. The original implementation used mplayer to play back the samples; however, there was up to 1500 ms of latency between executing mplayer and the start of playback.

I didn’t do detailed profiling, but I think the two factors causing mplayer’s high latency were that (i) just loading all the .so libraries mplayer depends on can take many hundreds of milliseconds, and (ii) the file is scanned for streams and other metadata, which can also take some extra time; perhaps I could force mplayer to realize this is a simple MP3 file, but (i) is still the much bigger factor.

I wanted to avoid re-encoding all the samples to WAV. That would allow me to use aplay directly and playback would start immediately, but it would also feel really silly; decoding the MP3 is not the bottleneck, the latency of mammoth software loading and initializing itself is. I also didn’t try mpd, as that might have been a bit painful to set up.

Another point worth noting is that I didn’t use the crappy on-board PWM audio but a $3 Chinese USB soundcard (which is still much better than PWM audio), and a reasonably up-to-date Raspbian Wheezy. So I tried…

  • mplayer -slave -idle, started in parallel with my program and receiving commands via FIFO. It hangs after the first file (even though it works fine when run without -slave).
  • cmus running in parallel with my program, controlled by cmus-remote. Convincing it to use the ALSA device of my choice was really hard, but eventually I managed, only to hear my files sped up about 20x.
  • madplay I couldn’t convince to use a non-default ALSA device at all.
  • mpg123 started immediately and could play back the MP3 files on a non-default ALSA device. Somehow, the quality was very low though (telephone grade) and there was an intense high-pitched clip at the end of the playback.
  • mpg321 I couldn’t convince to produce any sound and anyway it had about 800ms latency before playback started, probably due to its libao dependency.
  • sox, or rather AUDIODEV=hw:1 play worked! (After installing a package with MP3 support for sox.) No latency, normal quality, no clips, no hangs. Whew.

Verdict: There still is software on Linux that can properly and quickly play MP3 files on the Raspberry Pi, though it was a challenge to find it. I didn’t think of sox at first and was almost giving up hope. BTW, normally you would use sox and play to apply a variety of audio transformations and effects in a batch/pipeline fashion – it can do a lot of awesome magic.
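
For the record, here is a minimal sketch of how the winning combination could be driven from the button-handling program; the AUDIODEV=hw:1 play invocation is the one from the list above, while the function name and sample path are hypothetical:

  import os
  import subprocess

  def play_sample(path, alsa_device="hw:1"):
      # Point sox's `play` at the USB soundcard via AUDIODEV and start playback;
      # -q suppresses the progress display.
      env = dict(os.environ, AUDIODEV=alsa_device)
      subprocess.call(["play", "-q", path], env=env)

  # e.g. called from a GPIO button callback:
  # play_sample("/home/pi/samples/button1.mp3")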

Categories: linux, software

Conversion from mixed UTF8 / legacy encoding data to UTF8

September 23rd, 2012 No comments

For about 13 years now, I have been running the Muaddib IRC bot, which serves a range of Czech channels. Its features have varied historically, but the main one is providing conversational AI services (it learns from people talking to it and replies based on what it has learnt). It runs the Megahal Markov chain algorithm, currently using the Hailo implementation.

Sometimes I need to reset its brain, most commonly when the server hits a disk-full situation, something no Megahal implementation seems to be able to deal with gracefully. :-) (Hailo is SQLite-based.) Thankfully, with all the IRC logs archived, rebuilding is a simple sed job. However, Muaddib has always had trouble with non-ASCII data, mixing a variety of encodings and tending to produce gibberish.

So, historically, people talked to Muaddib using both ISO-8859-2 and UTF-8 encodings, which left me with mixed ISO-8859-2/UTF-8 lines that I wanted to convert all to UTF-8. Curiously, I was not able to quickly Google up a solution and had to hack together my own (and, well, dealing with Unicode in Perl never goes quickly). For the benefit of fellow Google wanderers, here is my take:

perl -MEncode -ple 'BEGIN { binmode STDOUT, ":utf8"; }
  $_ = decode("UTF-8", $_, sub { decode("iso-8859-2", chr(shift)) });'

It relies on Encode::decode()’s ability to specify a custom conversion-failure handler (and on the fact that Latin2 character sequences that are also valid UTF-8 are fairly rare). Note that Encode 2.35 (found in Debian squeeze) is broken: while it documents this feature, it doesn’t work. Encode 2.42_01 in Debian wheezy or the latest CPAN version (use perl -MCPAN -e 'install Encode' to upgrade) works fine.
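
For comparison, a rough Python equivalent of the same idea (not the script I used, just an illustration): try UTF-8 first and fall back to ISO-8859-2 when a line is not valid UTF-8. This handles the common case where each line is in a single encoding; the Perl handler above is finer-grained, falling back byte by byte within a line:

  import sys

  def mixed_to_utf8(raw):
      # Valid UTF-8 passes through; anything else is assumed to be Latin2.
      try:
          return raw.decode("utf-8")
      except UnicodeDecodeError:
          return raw.decode("iso-8859-2")

  for line in sys.stdin.buffer:
      sys.stdout.write(mixed_to_utf8(line.rstrip(b"\n")) + "\n")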