
Throttling I/O activity on Linux

A Bugzilla installation (for ELinks), mostly unattended for many years, produced a temporary directory where just the directory _itself_ was about 32M big – there must’ve been quite a few million files in it. rm -r was able to deal with it, with a small catch – the machine would quickly become totally unusable, with mp3 playback, the mailer and everything else stalling for tens of seconds as the I/O queue got jammed.

But the strangest thing is that I found no way to prevent that – at least on my ext3 filesystem and 2.6.29! I tried running rm with nice -n 19; of course that didn’t help. Much more surprisingly, ionice -c 3 or ionice -c 2 -n 7 had no effect either – why that is, is lost on me… Changing the I/O scheduler from the cfq default didn’t help either. In the end, I had to manually ratelimit the removals to one per 1ms, and I used this snippet to delete the directory:

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* nftw() callback: remove one entry, then sleep for 1ms so the
 * I/O queue never gets flooded with pending deletions. */
static int
remove_entry(const char *fpath, const struct stat *sb,
             int tflag, struct FTW *ftwbuf)
{
    puts(fpath);
    if (remove(fpath) == -1)  /* remove() handles both files and directories */
        perror(fpath);
    usleep(1000);             /* the actual ratelimit: one removal per 1ms */
    return 0;
}

int
main(int argc, char *argv[])
{
    /* FTW_DEPTH: visit a directory's contents before the directory itself,
     * so it is already empty when we try to remove it.
     * FTW_PHYS: do not follow symlinks out of the tree being deleted. */
    int flags = FTW_DEPTH | FTW_PHYS;

    if (nftw((argc < 2) ? "." : argv[1], remove_entry, 20, flags) == -1) {
        perror("nftw");
        exit(EXIT_FAILURE);
    }
    exit(EXIT_SUCCESS);
}
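For completeness, this is roughly how to build and run it – the file name slowrm.c and the target path are just examples, not anything the snippet depends on:

gcc -Wall -o slowrm slowrm.c
./slowrm /path/to/the/offending/tmpdir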

Does anyone know if Linux really cannot prevent a single process from totally jamming the I/O of a machine?

Categories: linux
  1. May 28th, 2009 at 14:18 | #1

    There is a bug report, but unfortunately the kernel developers need more info to reproduce it: http://bugzilla.kernel.org/show_bug.cgi?id=12309


