### Archive

Archive for the ‘software’ Category

## Playing MP3 on Raspberry Pi with low latency

One commercial project I was working on for Raspberry Pi involved playing various MP3 samples when a button is pushed. The original implementation used mplayer to play back the samples; however, there was up to 1500 ms of latency between mplayer being executed and the start of playback.

I didn’t do detailed profiling, but I think the two factors causing mplayer’s high latency were that (i) just loading all the .so libraries mplayer depends on can take many hundreds of milliseconds, and (ii) the file is being scanned for whatever stuff, streams detected etc., and that can also take some extra time; perhaps I could force mplayer to realize this is a simple MP3 file, but (i) is still the much bigger factor.
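If you want to compare candidates’ startup latency yourself, a rough way is to time the whole player run on a very short sample. This is a hypothetical measurement sketch (the player commands and the file name are placeholders); it times process completion rather than the actual start of playback, but on a near-empty input the result is dominated by the startup cost:

```python
import subprocess
import time

def startup_latency(cmd):
    """Wall-clock seconds to run `cmd` to completion.  For a player this
    over-measures (it includes decode + playback of the sample), but on
    a very short input it is dominated by startup/initialization cost."""
    t0 = time.monotonic()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.monotonic() - t0

# e.g. compare startup_latency(["mplayer", "blip.mp3"])
#  with startup_latency(["play", "blip.mp3"])   (file name is a placeholder)
```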

I wanted to avoid recoding all the samples to WAV. That would allow me to use aplay directly, and playback would start immediately, but it would also feel really silly; decoding MP3 is not the bottleneck, just the latency of mammoth software loading and initializing itself is. I also didn’t try mpd, as that might have been a bit painful to set up.

Another point worth noting is that I didn’t use the crappy on-board PWM audio but a $3 Chinese USB soundcard (which is still much better than PWM audio), and a reasonably up-to-date Raspbian Wheezy. So I tried…

• mplayer -slave -idle, started in parallel with my program and receiving commands via FIFO. It hangs after the first file (even though it works fine when run without -slave).
• cmus running in parallel with my program, controlled by cmus-remote. Convincing it to use the ALSA device of my choice was really hard, but eventually I managed, only to hear my files sped up about 20x.
• madplay I couldn’t convince to use a non-default ALSA device at all.
• mpg123 started immediately and could play back the MP3 files on a non-default ALSA device. Somehow, the quality was very low though (telephone grade) and there was an intense high-pitched clip at the end of the playback.
• mpg321 I couldn’t convince to produce any sound, and anyway it had about 800 ms of latency before playback started, probably due to its libao dependency.
• sox, or rather AUDIODEV=hw:1 play, worked! (After installing a package with MP3 support for sox.) No latency, normal quality, no clips, no hangs. Whew.

Verdict: there is still software on Linux that can properly and quickly play MP3 files on Raspberry Pi, though it was a challenge to find it. I didn’t think of sox at first and was almost giving up hope. BTW, normally you would use sox and play for applying a variety of audio transformations and effects in a batch/pipeline fashion; it can do a lot of awesome magic.

## Conversion from mixed UTF8 / legacy encoding data to UTF8

September 23rd, 2012

For about 13 years now, I have been running the Muaddib IRC bot that serves a range of Czech channels. Its features have varied historically, but the main one is providing conversational AI services (it learns from people talking to it and replies back based on the learnt stuff).
It runs the Megahal Markov chain algorithm, using the Hailo implementation right now. Sometimes, I need to reset its brain – most commonly when the server happens to hit a disk-full situation, something no Megahal implementation seems to be able to deal with gracefully. :-) (Hailo is SQLite-based.) Thankfully, it’s a simple sed job with all the IRC logs archived.

However, Muaddib has always had trouble with non-ASCII data, mixing a variety of encodings and liking to produce gibberish results. Historically, people used to talk to Muaddib using both ISO-8859-2 and UTF8 encodings, so I had mixed ISO-8859-2/UTF8 lines and wanted to convert them all to UTF8. Curiously, I have not been able to quickly Google out a solution and had to hack together my own (and, well, dealing with Unicode in Perl is never something that goes quickly). For the benefit of fellow Google wanderers, here is my take:

```
perl -MEncode -ple 'BEGIN { binmode STDOUT, ":utf8"; }
  $_ = decode("UTF-8", $_, sub { decode("iso-8859-2", chr(shift)) });'
```

It relies on the ability of Encode::decode() to specify a custom conversion-failure handler (and the fact that Latin2 character sequences that are also valid UTF-8 sequences are fairly rare). Note that Encode 2.35 (found in Debian squeeze) is broken: while it documents this feature, it doesn’t work. Encode 2.42_01 in Debian wheezy or the latest CPAN version (use perl -MCPAN -e 'install Encode' to upgrade) works fine.

## On Android and CyanogenMod

September 8th, 2012

On Wednesday, I bought myself an Android phone, as my good old S-E C510 suffered from worse and worse charging problems. I have found it pretty much impossible to type on a touchscreen and did not see any improvement even after light practice (on a spare second-hand Android phone I acquired just for its sensors – sometime in the future, maybe it will drive a quadcopter). So, I went for the Sony Ericsson Xperia Pro (codename iyukan) with its hardware keyboard.
It’s a pretty neat phone; my only complaint is a difficult-to-press power button. However, just after turning it on for the first time, the phone prompted me to upgrade from Android 2.3 to Android 4. The fool I was, thinking that newer is better and wanting to summarily get rid of all the preloaded apps… And since a friend told me that CyanogenMod works all right on this phone, I would need a Windows PC to upgrade to Android 4 the Sony Ericsson way, I like to have full control over the systems I use, and I like CM’s tray design ;-), I went for it.

First, some tips and tricks for fellow Googlers who come by this post needing to get CyanogenMod working on their Xperia Pro:

• Do not expect the CyanogenMod wiki to be a place to document even critical issues, learn about them and how to solve them. Your only shot is hitting the issue blindly and then following up with wild googling and IRC. More on that below.
• Ignore stock CyanogenMod. What you want is the CyanogenMod fork FreeXperia (FXP), which contains CM tuned for Xperia phones, with both a custom kernel and a set of drivers and applications. Follow the regular CyanogenMod flashing howto, just use the .zip files provided by FreeXperia. The latest CM9.1 Xperia version FXP136 worked quite well for me, aside from wifi troubles (more on that below), camera autofocus issues on touching the camera button, and maybe some compass weirdness (I haven’t verified that yet, but there are workarounds in the tracker in case it proves to be a real issue).
• If you insist on stock CyanogenMod 9.0.0-rc2, replace the boot.img you will be flashing (the kernel image) with the one from FXP136, or your phone will essentially refuse to start up, with applications like the Setup Wizard crashing and, if you manage to get past that, the phone being quite sluggish.
• The new FXP WiFi drivers for the Xperia Pro (wl12xx, specifically wl1271) have support for some extended powersaving features that depend on RX streaming. On some APs, that means the device will receive packets only up to 100 ms after it transmits packets itself – any packets coming after that will be lost, which means that communication with sites that take a little while to process your requests (e.g. the Market), or any kind of streaming, breaks. I spent the whole last night fiddling with wifi and binary-patching wl12xx.ko to tweak the parameters, but I just didn’t manage to get it working with my wifi AP. However, over the night I have tilted towards thinking that this is slightly more likely a bug in the powersaving support of my AP rather than in the wifi firmware, which is simply using more aggressive powersaving modes than the other Android phones and devices like notebooks that visited my home wifi network before. I have pretty much given up on debugging this and will just buy a new AP, since the phone works fine with all other APs I have come by so far (but there are scattered reports about this problem on the net).
• My phone refuses to properly authenticate with my AP (always stuck in the “Obtaining IP address” stage, but in fact it never comes to DHCP; instead it fails right after authentication); wpa_supplicant logs WPA: EAPOL-Key Replay Counter did not increase - dropping packet and that’s it. After I restart my AP, the authentication succeeds… once; if I disconnect, I won’t connect again anymore. Again, this happens just with my AP, so maybe there is some connection to the previous problem, perhaps some authentication packets being dropped… This happens with WEP, WPA-PSK TKIP or AES, … The only workaround I have found is to restart the AP.
• Before that, my phone would get stuck in a different way, believing that its rightful IP address is 169.254.222.something and never asking DHCP for an actual IP address. The solution to that problem is to open a terminal, su, and rm /data/misc/dhcp/*.leases.
• Also, don’t panic if you are connecting to eduroam; even though the WiFi authentication dialog shows phase 2 as “None”, that does not mean wpa_supplicant on the phone is not internally using MSCHAP. :-)

So, in the end, the phone has eaten much of my last three days, and that time was not spent installing and fiddling with neat apps but debugging frustrating issues. I hope it will serve me better from now on… :-) But this has also been an interesting lesson in dysfunctional open source projects – yes, I mean CyanogenMod and FreeXperia.

First of all, the problem is that the projects are very unfriendly to their audience. Sure, CyanogenMod has a pretty front website, and after some very non-straightforward navigation you may even reach a straightforward HOWTO for your phone that you may follow to do the installation, but the project becomes unfriendly once you need to do some powerusery things with your phone, or even start taking a look at the source and doing some development. I should note that some of the issues are probably FreeXperia-specific. Let’s take a look at some of the problems:

• Bad overview documentation. I found no way to actually learn on my own about FreeXperia and its relationship to CyanogenMod (which is still not completely clear to me). Even long after the first hints to “use FXP136” or whatever, I was clueless about what “FXP” actually meant.
• Bad release documentation. On the Xperia Pro, the latest official CyanogenMod is 9.0.0-RC2. There appears to be absolutely no way to learn what kind of state it is in – what blocker bugs keep it at RC2? Is it worth waiting for 9.0.0? It appears to be all just in the minds of the maintainers, so the only way to decide which version of CM to pick is to waste time trying to install it. Also, the FreeXperia homepage carries essentially no documentation either, not even a link to a fairly essential companion forum thread.
• Bad detailed documentation. It appears that the only way to learn about issues and try to solve them is either asking on the forum and navigating its unwieldy paginated threads, or asking on IRC and hoping someone knowledgeable is by accident following the channel at that moment. There is a wiki, but most attempts to document issues and help out fellow users, or simply correct factual errors, appear to be reverted without explanation.
• Bad development documentation. The Xperia Pro is actually a huge exception here, since there is an actual HOWTO on compiling CyanogenMod for it using the arcane build system. However, trying to navigate the masses of GitHub repositories of both CyanogenMod and FreeXperia and understanding how they relate – in which repository and in which branch can I actually find the kernel I’m running, and where does my wl12xx module come from – has taken me several hours anyway. While FreeXperia is supposed to be an “open source” project, there is actually no word on its homepage about where to get the sources and how they are built; you are on your own in the GitHub maze (and no, there is no link to GitHub’s FreeXperia account on its homepage either). I’d say this is on the verge of violating the GPL, though probably not quite over the line yet…
• Less than ideal developer attitude. The people on IRC are mostly very helpful and I thank them again for all their help. But I have been rather discouraged by my wiki experience – and why should I even bother reporting bugs? I complained a bit about some of these issues in the past few days. A fellow IRC user asked, “would you rather developers spend their time on documentation than fixing bugs?” I think a resounding “YES” is in order. The most basic documentation (what is what) does not take long to write and goes a long way. Also, putting effort into fixing bugs is usually no excuse for a bad attitude towards users.
It seems to me that to be a happy CyanogenMod user, you either do not actually put much effort into poking the system and are lucky to have a mainstream device with all the major issues ironed out, or you go all the way to become a core developer and learn about all the details. If you are stuck somewhere in between, willing to get to the bones of the system to solve your problem but just wanting to solve your problem, CyanogenMod/FreeXperia gives you no choice but to spend days learning about all the ways things work and get done. Given that, it is actually surprising to me that it still works as well as it does. It is an interesting case study in open source dynamics. I think it will be interesting to see whether FreeXperia can survive for a long time as the original developers, who don’t work in a very open environment, wear out and the enthusiasm of fresh newcomer developers is required… Let’s watch and learn!

## hed – fast hexadecimal editor, now packaged for Debian

July 1st, 2012

A few years ago, I wrote hed as a school project. It’s yet another terminal hexadecimal editor, but with a few unique features. Thanks to its splay tree file representation, it is able to handle editing and even inserting into huge files very efficiently; the file is not loaded into memory as a whole, just the modified parts are saved, and therefore you are able to edit even files many gigabytes in size. You can also save just the swap file separately as a “working diff” and restore your changes later on top of the unmodified original file.

It uses vi-like keybindings (including marks and yank/paste registers or :!). It also features an “expression” concept that lets you efficiently compose search, substitute or jump expressions from a variety of data representations, supporting arithmetic operators and register references. E.g. using the special register ”. (data under cursor), you can use the command #”. to jump to the file offset written under the cursor.

I’m writing about it again now since I just pushed out Debian packaging for the editor, so you can easily make Debian or Ubuntu packages for yourself from the source (it also has existing openSUSE packaging). Try it out! I’m not maintaining the project anymore, but Petr Tesarik will gladly accept any patches or feedback (or I will too, forwarding it to him :-).

## Perl and UTF8

June 24th, 2012

I love Perl and it’s my language of choice for much of the software I write (between shell at one extreme and C at the other). However, there is one thing Perl really sucks at – Unicode and UTF8 encoding support. It is not that the features aren’t there, but that getting them to work is so tricky. There are so many tricks to remember that I started writing them down: http://brmlab.cz/user/pasky/perl-utf8 It’s a wiki; anyone is welcome to contribute. :-)

## Using CUPS to print text files in non-UTF8 charset encoding

May 17th, 2012

At our university department, many people still haven’t migrated to UTF8 and are still happily using ISO-8859-2 – mainly due to the amount of legacy text (TeX, …) documents. Nowadays, support for non-UTF8 is slowly waning though, and CUPS is a prime example. Most of the (shabby anyway) support for non-UTF8 encodings was removed a few years ago. It is still possible to force CUPS to print text files in a non-UTF8 encoding if you extract the appropriate files from an ancient version (1.2 or some such) of CUPS to /usr/share/cups/charset/ and print using e.g. lpr -o document-format='text/plain;charset=iso-8859-2'. However, there is simply no support for lpr automatically setting the charset based on your locale. We decided that the best way to go is to simply auto-detect the encoding using the awesome enca package and convert text files from this encoding to UTF8.
This should actually be fairly fool-proof in practice, unless you are dealing with an extremely mixed set of languages. Making your own CUPS filter is easy – just change the texttops entries in /etc/cups/mime.convs to textautoencps and create a new /usr/lib/cups/filter/textautoencps file:

```
#!/bin/bash
if [ $# == 0 ]; then
	echo >&2 "ERROR: $0 job-id user title copies options [file]"
	exit 1
fi
{ if [ $# -ge 6 ]; then cat "$6"; else cat; fi; } |
	enconv -x utf-8 -L czech |
	/usr/lib/cups/filter/texttops "${@:0:6}"
```

## Publicly Killable Computations

March 7th, 2012

At our university department, people sometimes need to run expensive or long-term computations. We have a few servers reserved for computations, but frequently it is useful to run computations on machines in the offices, since some of them are fairly powerful and mostly see only very light CPU use.

However, such computations must never impair any interactive or more pressing use of the machine. Therefore, we want to limit the scheduling priority of the computations, limit the total memory used by the computations, and allow *anyone* to kill *any* running computation. It turns out that this is not as trivial to achieve as I hoped.

In comes Computations under control: compctl – cgroup-based control of publicly limitable and stoppable tasks. It is a tool that allows anyone to execute a command (or start screen) such that it is marked as a computation. Then, it allows anyone else to limit the total amount of memory allocated for all computations and to stop a specific computation or all computations on a machine. It uses cgroups to keep track of computations and limit the total memory usage, and a simple client–server architecture to perform privileged tasks.
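As an illustration of the underlying mechanism, here is a hypothetical Python sketch of the cgroup v1 memory-controller operations such a tool builds on. The cgroup path, the group name and the parse_size helper are all my assumptions for the sketch, not compctl’s actual code (which also handles the client–server privilege separation this ignores):

```python
import os
import signal

# Hypothetical cgroup v1 hierarchy for computations; compctl's real
# layout and names may differ.
CGROUP_DIR = "/sys/fs/cgroup/memory/compute"

def parse_size(s):
    """Parse a human-readable size such as '512M' or '2G' into bytes."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    s = s.strip()
    if s and s[-1].upper() in units:
        return int(s[:-1]) * units[s[-1].upper()]
    return int(s)

def set_total_memory_limit(limit):
    """Cap the memory of *all* computations at once (requires root)."""
    with open(os.path.join(CGROUP_DIR, "memory.limit_in_bytes"), "w") as f:
        f.write(str(parse_size(limit)))

def mark_as_computation(pid):
    """Move a process into the computation group so it can be tracked."""
    with open(os.path.join(CGROUP_DIR, "tasks"), "w") as f:
        f.write(str(pid))

def stop_computations():
    """Kill every task currently registered as a computation."""
    with open(os.path.join(CGROUP_DIR, "tasks")) as f:
        for pid in f.read().split():
            os.kill(int(pid), signal.SIGKILL)
```

The point of tracking tasks in a cgroup rather than by PID is that forks stay inside the group, so a whole computation can be limited and stopped as one unit.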

I hope it will be useful for someone else too. :-) Feel free to send in patches, and extra pairs of eyeballs checking the security would be welcome too. Top on my TODO list is a simple Debian package and a more verbose compctl --list output.


## Full-text search in mutt: alternative notmuch integration

If some feature is too slow, you end up consciously avoiding it and losing productivity. This is one of the reasons we emphasize so much that Git be as fast as it is – you end up using it more because of that. One thing I always found very frustrating was full-text search in mutt; it takes _minutes_ on my mailbox, and I end up trying many different header-based queries instead in order to find the mail. But today, I finally set up notmuch, a very nice and fast mail indexer.

Unfortunately, there was no satisfying way of integrating notmuch with mutt! There is a notmuch-mutt script which creates a temporary maildir with the results and moves me there. This was not going to work for me – you cannot make any changes in the “search results list”, like deleting mails (I wonder whether status would carry over if I replied to mails; I suspect not), and in order to get back to your mails, you need to switch mailboxes – which implies that your previous position is not restored and that it’s quite slow (a few seconds – too much!).

What I envisioned instead was something like the ‘l’imit function that I use very much, just faster. ;-) It turns out that mutt can match message-ids in the limit query and that notmuch can output a list of message-ids of matched mails. Therefore, the most hackish approach is simply to use notmuch to generate a limit specification and perform that – and it turns out that this is good enough (in my scenario)!
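The message-id trick can be sketched outside mutt, too. This hypothetical Python helper (msgids_to_limit is my name, not anything in notmuch) turns the output of notmuch search --output=messages into a mutt limit pattern, mirroring what the macro below does with a perl one-liner:

```python
import re

def msgids_to_limit(notmuch_output, cap=600):
    """Convert `notmuch search --output=messages` output (lines such as
    'id:1234@example.org') into a mutt `~i` limit pattern.  Like the
    macro's perl one-liner, regex metacharacters in the ids are left
    unescaped -- good enough in practice, since ids rarely collide."""
    lines = notmuch_output.splitlines()[:cap]
    ids = [re.sub(r"^id:", "", ln.strip()) for ln in lines if ln.strip()]
    return '~i "(%s)"' % "|".join(ids)

assert msgids_to_limit("id:a@x\nid:b@y") == '~i "(a@x|b@y)"'
```

The cap argument plays the role of the head -n 600 in the macro: it bounds the size of the regex that mutt has to match against every message.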

Just put these two bindings (or only the first one) in your .muttrc:

```
# 'L' performs a notmuch query, showing only the results
macro index L "<enter-command>unset wait_key<enter><shell-escape>read -p 'notmuch query: ' x; echo \$x >~/.cache/mutt_terms<enter><limit>~i \"\`notmuch search --output=messages \$(cat ~/.cache/mutt_terms) | head -n 600 | perl -le '@a=<>;chomp@a;s/^id:// for@a;$,=\"|\";print@a'\`\"<enter>" "show only messages matching a notmuch pattern"
# 'a' shows all messages again (supersedes default <alias> binding)
macro index a "<limit>all\n" "show all messages (undo limit)"
```

Perhaps sometime in the future, we will get native libnotmuch support in mutt, but I think this is a pretty good substitute for now. :-)

## TODO list

• The way this snippet prompts using a temporary file is completely absurd; mutt needs to get a builtin prompt function for its macros.
• Only the most recent 600 search hits are shown, since…
• …the filtering is grossly inefficient; it is still very fast on my computer, but if mutt could just directly get a list of message ids and match them, things would be much nicer than me abusing the regex matching machinery.
• The 600-search-hit limit is global over all folders; therefore, if you have a lot of mails and a lot of folders, searching for a common word may hide even some recent results.
• notmuch cannot search for substrings, apparently, only whole words.
• notmuch does not deal with diacritics and other locale transliteration character classes.

## Realtime Signal Analysis in Perl

September 24th, 2011

About a month ago, we were working on the Fluffy Ball project – a computer input device that can react to fondling and punching. Thanks to a nice idea on the brmlab mailing list, we use a microphone and process the noise coming from the ball’s scratchy stuffing and an embedded jingle. The sounds from the outside are almost entirely dampened by the stuffing, and for a human, the noise of fondling and punching is easily distinguishable.
> A frequency spectrum, for our purposes, is just an array indexed by frequency, storing the amplitude of each frequency (in some range). A common variation is the power spectrum, which describes the power of each frequency, i.e. the amplitude squared. The frequency spectrum is obtained by splitting the input signal into fixed-size samples and performing a Discrete-Time Fourier Transform.

It turns out that trivial spectrum-based rules can be used to achieve reasonably high detection accuracy for a computer too (especially when the user is allowed to “train” her input based on feedback); I had big plans to use ANNs and all the nifty things I learned in our AI classes, but it turned out to be simply overkill. The input signal is transformed to a frequency spectrum (see box) using a real discrete FFT.

So, we have the audio signal coming in from a regular mic device and need to process it further. I chose Perl for quick prototyping and assumed that I would find some pre-made scaffolding for this ready. But it turns out that no one has really published a simple example of even just showing a real-time frequency spectrum. So, here you go! :-)

First, we need some reasonable way to continuously display the spectrum. Most GUI paradigms are event-driven, but events are usually user interaction pieces, and while it would be possible to incorporate continuous data-based updates in this model, it feels quite backwards. So we use a trick:

```
use warnings;
use strict;

init_dsp();
init_fft();

use Tk;
our $mw = MainWindow->new;
$mw->after(1, \&ticks); # after 1ms, give control back
MainLoop;

sub ticks {
	while (1) {
		render_signal(process_signal(read_dsp()));
		$mw->idletasks();
	}
}
```

This circumvents the event-driven architecture of Tk and instead puts our main loop in control, processing any GUI events when it’s a good time. For more complex programs, this is a bad idea and will lead to poorly maintainable code, but when writing simple tools, you should not succumb to grand frameworks and let your code overgrow you.

Okay, how to grab audio input signal in Perl? Unfortunately, there are not really any handy modules you could use thoughtlessly. Audio::DSP is a possibility, but using it is clumsy, especially in the current world of ALSA as you have to rely on the imperfect aoss wrapper. A simple alternative is to get the raw byte data through a pipeline from the ALSA arecord tool:

```
our ($devname, $fmt, $bitrate, $wps, $bps, $bufsize, $dsp);
BEGIN {
	$devname = "default"; # or e.g. hw:1,0 for an additional USB soundcard input
	$fmt = 16;            # sample format (bits per sample)
	$bitrate = 16384;     # sample rate (number of samples per second)
	$wps = 8;             # FFT windows per second (rate of FFT updates)
	$bps = ($fmt * $bitrate) / 8; # bytes per second
	$bufsize = $bps / $wps;       # window buffer size in bytes
}

sub init_dsp {
	open ($dsp, '-|', 'arecord', '-D', $devname, '-t', 'raw', '-r', $bitrate, '-f', 'S'.$fmt)
		or die "arecord: $!";
	use IO::Handle;
	$dsp->autoflush(1);
}

sub read_dsp {
	my $w;
	read $dsp, $w, $bufsize or die "read: $!";
	return $w;
}
```

read_dsp will return one signal window per call, the window being a binary blob consisting of one two-byte word per sample. We want to magically convert this to a spectrum. Audio::Analyze is again the simple way to get a signal spectrum. If you are after analyzing a pure audio signal, you probably want to use it, since it can easily filter the signal based on the relative human perception of frequencies etc. But for us, it is inconvenient to feed it data through a pipe, and we will directly use Math::FFT. It will still handle all the gory math for our case (and we care about the actual noise, not the way people would hear it).

```
use Math::FFT;
use List::Util qw(sum);

our @freqs;
sub init_fft {
	my $dft_size = $bitrate / $wps;
	for (my $i = 0; $i < $dft_size / 2; $i++) {
		$freqs[$i] = $i / $dft_size * $bitrate;
	}
}

sub process_signal {
	my ($bytes) = @_;

	# Convert raw bytes to a list of numerical values.
	$fmt == 16 or die "unsupported $fmt bits per sample\n";
	my @samples;
	while (length($bytes) > 0) {
		my $sample = unpack('s<', substr($bytes, 0, 2, ''));
		push(@samples, $sample);
	}

	# Perform RDFT.
	my $fft = Math::FFT->new(\@samples);
	my $coeff = $fft->rdft;

	# The output are complex numbers describing the exactly phased
	# sin/cos waves. By taking an abs value of the complex numbers,
	# we just measure the amplitude of a wave for each frequency.
	my @mag;
	$mag[0] = sqrt($coeff->[0]**2);
	for (my $k = 1; $k < @$coeff / 2; $k++) {
		$mag[$k] = sqrt(($coeff->[$k * 2] ** 2) + ($coeff->[$k * 2 + 1] ** 2));
	}

	# Rescale to 0..1. Many fancy strategies are possible, this is
	# extremely silly.
	my $avgmag = sum (@mag) / @mag;
	@mag = map { $_ / $avgmag * 0.3 } @mag;
	return @mag;
}
```

Not much to add besides the inline comments. The input of the process_signal function is a raw byte stream, the output is a list of amplitudes; @freqs maps the list indices to the actual Hz frequencies. The normalization to [0,1] interval shown here (pitching the mean at 0.3) is extremely naive, again there are many possible strategies. Also, you certainly want to use a window function etc. in more serious applications.
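For readers who want to check the math without Perl or Math::FFT at hand, here is an equivalent (if naive, O(n²)) magnitude-spectrum computation in Python. It is an illustrative sketch, not the code the fluffy ball uses:

```python
import math

def magnitude_spectrum(samples):
    """Naive O(n^2) DFT returning one amplitude per frequency bin --
    the same quantity the Perl code gets from Math::FFT's rdft
    followed by taking the absolute value of each complex coefficient."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A pure tone occupying bin 5 of a 64-sample window peaks exactly there:
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
mags = magnitude_spectrum(tone)
assert mags.index(max(mags)) == 5
```

A real FFT computes the same result in O(n log n); the quadratic version is only bearable because our windows are a couple of thousand samples at most.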

Now, for the visualization. We have chosen Tk for our GUI (it looks ugly, but it is reasonably easy to use despite its Tcl antics). We will use its Canvas object where we can draw freely, and just plot a line for each frequency:

```
our $canvas;
sub render_signal {
	# Display parameters, tweak to taste:
	my $rows = 2;
	my $hspace = 20;
	my $height = 150;
	my $vspace = 20;

	my @spectrum = @_;
	my $row_freqn = @spectrum / $rows;

	unless ($canvas) {
		$canvas = $mw->Canvas(
			-width => $row_freqn + $hspace * 2,
			-height => $height * $rows + $vspace * ($rows + 1));
		$canvas->pack;
	}
	$canvas->delete('all');

	for my $y (0..($rows-1)) {
		for my $x (0..($row_freqn-1)) {
			my $hb = ($height + $vspace) * ($y + 1);
			my $i = $row_freqn * $y + $x;

			# Draw line:
			my $ampl = $spectrum[$i];
			$ampl <= 1.0 or $ampl = 1.0;
			my $bar = $height * $ampl;
			$canvas->createLine($x + $hspace, $hb, $x + $hspace, $hb - $bar);

			# Draw label:
			if (!($x % ($row_freqn/4))) {
				$canvas->createLine($x + $hspace, $hb + 0, $x + $hspace, $hb + 5, -fill => 'blue');
				$canvas->createText($x + $hspace, $hb + 15, -fill => 'blue', -font => 'small', -text => $freqs[$i]);
			}
		}
	}

	$mw->update();
}
```

This suffices for a naive visualization, you can easily tweak it to do thresholding and whatever else you desire. I have found that on some of my computers, the X protocol is pushed to its limits by repeatedly drawing a large amount of lines, and sometimes the spectrum will start to lag behind the signal; either show wider bars averaging together multiple frequencies, or use something other than a Canvas object – raw pixmap transfer would likely be better than such a large amount of line drawing operations.

For serious signal analysis work, you will also want a spectrogram – a time-based plot of the amplitude of various frequencies.

To get a working script skeleton, simply piece the code snippets together (fb-simple.pl). See fb.pl for the real fluffy ball script. It is much uglier, but it maintains sample averages over longer time windows (essential for more complex signal analysis), has simple sample recording capabilities, and includes an example of a naive threshold-based classifier.
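To give an idea of what such a naive threshold-based classifier can look like, here is a hypothetical Python sketch. The band index ranges and the ratio threshold are made-up placeholders that one would tune against recorded samples; the actual fb.pl logic differs:

```python
def classify(spectrum, low_band, high_band, ratio=3.0):
    """Label a spectrum 'punch' when low-frequency energy dominates
    high-frequency energy by more than `ratio`, else 'fondle'.
    `low_band`/`high_band` are (start, end) bin index ranges; both
    they and `ratio` are placeholders to be tuned on real recordings."""
    low = sum(spectrum[low_band[0]:low_band[1]])
    high = sum(spectrum[high_band[0]:high_band[1]]) or 1e-9  # avoid /0
    return "punch" if low / high > ratio else "fondle"

# Dummy 6-bin spectra: energy concentrated low vs. high.
assert classify([9, 9, 9, 1, 1, 1], (0, 3), (3, 6)) == "punch"
assert classify([1, 1, 1, 9, 9, 9], (0, 3), (3, 6)) == "fondle"
```

Running this per window over the averaged spectra (rather than a single raw window) is what makes the decision stable against momentary noise.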


## Master Thesis: Information Sharing in MCTS

August 17th, 2011

Just a quick note – at the beginning of August, I submitted my master’s thesis, titled Information Sharing in MCTS. MCTS stands for Monte Carlo Tree Search, a powerful technique for finding moves in games with large state spaces and difficult evaluation functions, such as Go.

The thesis presents modern Computer Go techniques and open problems, my (not too weak) Go-playing program Pachi, and some modest improvements to the MCTS algorithms. It might make a good introduction to state-of-the-art Computer Go.
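For readers unfamiliar with MCTS: the core of its tree-descent phase is choosing which child move to explore, classically via the UCB1 rule. A minimal illustrative sketch follows – this is textbook UCB1, not Pachi’s actual code, which uses more refined variants:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 score: mean value (exploitation) + confidence bonus (exploration)."""
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """MCTS descends into the child move with the highest UCB1 score."""
    return max(children, key=lambda ch: ucb1(ch["wins"], ch["visits"], parent_visits))

# Toy example: two candidate Go moves with simulation statistics.
a = {"move": "D4", "wins": 9, "visits": 10}
b = {"move": "Q16", "wins": 1, "visits": 10}
assert select_child([a, b], 20) is a  # exploit the stronger move
```

The exploration constant c trades off revisiting strong moves against sampling uncertain ones; tuning it (and sharing statistics between related nodes) is exactly the kind of thing the thesis is about.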
