[ExtractStream] Random SplitStream Thoughts...

Roger Merchberger zmerch7 at y...
Sat, 9 Feb 2002 07:41:58 -0800 (PST)


--- Warren Toomey <wkt@t...> wrote:
> [ I just wrote this reply to Roger, and then I realised that I had
> assumed that Roger was working on a Unix/Linux system. If that's not
> the case, if he's working on a Windoze system, then everything I've
> said may be untrue - Warren ]

Yes, I'm currently working on Win2k - where I know the buffering
isn't nearly as efficient as it is on *nix...

> In article by Roger Merchberger:
> > 1) I think that splitstream can be made to work faster by
> > allocating a lot more memory & increasing the file input/output
> > buffers - right
> 
> If the program is bottlenecked by I/O then, no, getting it to
> allocate more memory won't make it run any faster.

Correct, but with Windows reading 128K from one part of the disk,
then saving it to 2 different parts of the disk, I'm hoping to reduce
some of that I/O bottleneck: if it can read 12.8M from one place in
one go, the hard drive only has to chatter away between the 2 output
files in larger chunks, instead of bouncing between all three files.
Those with slower IDE drives would see a larger speed increase - for
anyone lucky enough to have 3 RAIDed 15000RPM drives it wouldn't
matter nearly so much... ;-)

Allz I know is when I run splitstream on a downloaded stream, my hard
drive goes nutz while my CPUs twiddle their thumbs -- I'm trying to
see if there's a way to reduce the I/O bottleneck by reading more
than 1 chunk into memory at a time.

> Just looking through the code right now, I can see it uses fwrite()
> to write out the data, and fread() to read chunks in. They're pretty
> efficient, and I doubt that you would improve matters by buffering
> any more. You might be able to improve things a little bit with the
> use of setvbuf(), but I wouldn't expect much.
> 
> [ Just tried it, I got a 5% improvement by using a CHUNK_SIZE
> buffer and doing setvbuf(in_fp, mybuf, _IOFBF, sizeof(mybuf)); ]

Wooooshhhhhh... man, that was a Mach 2 Flyby... 
Maybe someday I could actually understand that... ;-)
Yeah... when somebody writes an "Advanced C Programming Reference
Library for Dummies"... ;-)
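
(OK, for my own notes and anybody else flying below Mach 2: after
squinting at the setvbuf() man page, I *think* the shape of Warren's
tweak is roughly this. The CHUNK_SIZE and in_fp names are just my
guesses at what splitstream calls things, so treat it as a sketch,
not the real patch:)

    #include <stdio.h>

    #define CHUNK_SIZE (128 * 1024)  /* guessing this matches splitstream's 128K chunk */

    /* big static buffer handed to stdio so it does fewer, larger disk reads */
    static char mybuf[CHUNK_SIZE];

    int main(int argc, char **argv)
    {
        FILE *in_fp;

        if (argc < 2) {
            fprintf(stderr, "usage: %s streamfile\n", argv[0]);
            return 1;
        }

        in_fp = fopen(argv[1], "rb");
        if (in_fp == NULL) {
            perror("fopen");
            return 1;
        }

        /* must be called after fopen() but before the first read:
           fully buffer the stream (_IOFBF) through our own big buffer */
        setvbuf(in_fp, mybuf, _IOFBF, sizeof(mybuf));

        /* ... then fread() chunks and fwrite() them out as splitstream
               already does ... */

        fclose(in_fp);
        return 0;
    }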

> In fact, if you buffer more chunks than you can fit into physical
> memory, the process will start thrashing the virtual memory
> subsystem, and this will _really_ slow the program down.

This much I know (and hinted at in my last post) - reading stuff off
the hard drive only to have it shoved right back out to swap... it's
just wrong. :-) { But dammit, Winders is so *good* at it...}

I've also checked, tho, and it looks like command-prompt-based
programs (compiled with Cygwin, anyway) can't allocate more than
256Meg of RAM in Win2k - I ran my showmem.exe program at work (384Meg
RAM, 192Meg swap, Celeron 400/OC450) and at home (512Meg RAM, 2Meg
swap, Dual Athlon MP 1600+) both running Win2k, and the numbers were
nearly identical.
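
(For the curious: a probe like that doesn't have to be clever - the
usual trick is just to malloc() fixed-size chunks until it refuses,
then report how far you got. A rough sketch of that kind of thing -
not my actual showmem code, and the 16Meg step is arbitrary:)

    #include <stdio.h>
    #include <stdlib.h>

    #define STEP (16 * 1024 * 1024)  /* grab memory 16Meg at a time */

    int main(void)
    {
        unsigned long total_meg = 0;

        /* keep allocating until malloc() gives up; we never free() any
           of it, because the OS reclaims the lot when the process
           exits - which is exactly the point Warren made above */
        while (malloc(STEP) != NULL)
            total_meg += STEP / (1024 * 1024);

        printf("allocated roughly %lu Meg before malloc() failed\n",
               total_meg);
        return 0;
    }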

> However, there might be one way of improving I/O performance, and
> that is to use memory mapping. I'm a BSD person and I would
> recommend mmap() and friends here. On SysV and relatives, something
> like shmat() and friends could be used. I'm not a Linux person, but
> just looking at a Debian system I can see something called
> memp_open().

Woooooshhh, take 2. ;-)
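
(Take 2 on the note-taking, then: my possibly-shaky understanding is
that mmap() hands the whole file to the kernel and lets it page the
data in on demand, instead of us fread()ing it chunk by chunk.
Something like the following, I *think* - BSD/Linux flavoured,
untested, and definitely not guaranteed to fly under Cygwin:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        struct stat st;
        char *data;
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s streamfile\n", argv[0]);
            return 1;
        }

        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(argv[1]);
            return 1;
        }

        /* map the whole input file read-only; the kernel pages it in
           as we touch it, so there's no separate read-into-buffer step */
        data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* ... walk data[0 .. st.st_size - 1] and fwrite() the pieces
               to the two output files the way splitstream already does ... */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }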

> > 2a) The first is very minor: the 128K memory allocated for the
> > stream is never freed in the program, as far as I can see.
> > Granted, as soon as the program exits, it'll be returned, but if
> > the program abends the memory is lost until reboot.
> 
> No, this isn't true. Any Unix process will have all of its resources
> freed by the operating system, regardless of how the process
> terminated. So, this 128K buffer will always be reclaimed by the
> system.

But in just about every Winders I've seen, memory that isn't free()d
is gone forever... Ever run WordPerfect for Windows? There's a memory
leak in it - start it, exit it normally, repeat 10 times, and watch
your Windows 9x/ME box crash. Win2K might be different WRT memory
management, but I don't like taking that chance... ;-) And not
everyone runs WinNT/2K, and XP still scares me...

[snip]

> It would also be a good idea to find out if Splitstream is more I/O
> intensive or more CPU intensive.

I/O, by far...

> If you open another window and run top before starting splitstream,
> you should see the CPU pretty idle. Then start splitstream; if the
> CPU usage goes to 0.0% idle, then splitstream must be CPU intensive,
> and you need to tune the code. If the CPU does not go to 0.0% idle,
> then splitstream is not CPU intensive, and needs to do its I/O
> faster to work.

Which is where I'm hoping that buffering multiple chunks in memory
will help: have Winders open() the input, read a big bunch at once,
then bounce between just the 2 files for output, versus bouncing
between all 3 files for input & output. In dealing with such large
files, the I/O latency of the heads skittering all over the place
must be murder... especially on a platform where file fragmentation
is so... common. (Another advantage of *nix - fragmentation is nearly
nonexistent...) There's a rough sketch of what I mean just below.
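
Something like this is the shape of what's rattling around in my head
- all the names and the "which file does this chunk go to" test are
made up, so don't mistake it for how splitstream actually structures
its loop:

    #include <stdio.h>

    #define CHUNK_SIZE (128 * 1024)    /* splitstream's 128K chunk, I believe */
    #define CHUNKS     100             /* so one gulp is ~12.8Meg */

    /* one big batch buffer: read ~12.8Meg in one go, then dole it out
       to the two output files, so the heads aren't bouncing between
       all three files for every single 128K chunk */
    static char batch[CHUNK_SIZE * CHUNKS];

    int main(int argc, char **argv)
    {
        FILE *in, *out1, *out2;
        size_t got, i;

        if (argc < 4) {
            fprintf(stderr, "usage: %s instream out1 out2\n", argv[0]);
            return 1;
        }
        in   = fopen(argv[1], "rb");
        out1 = fopen(argv[2], "wb");
        out2 = fopen(argv[3], "wb");
        if (!in || !out1 || !out2) {
            perror("fopen");
            return 1;
        }

        while ((got = fread(batch, 1, sizeof(batch), in)) > 0) {
            /* walk the batch a chunk at a time (ignoring any partial
               tail chunk for the sake of the sketch) */
            for (i = 0; i + CHUNK_SIZE <= got; i += CHUNK_SIZE) {
                /* stand-in for splitstream's real test of which output
                   stream a chunk belongs to - totally made up here */
                if ((i / CHUNK_SIZE) % 2 == 0)
                    fwrite(batch + i, 1, CHUNK_SIZE, out1);
                else
                    fwrite(batch + i, 1, CHUNK_SIZE, out2);
            }
        }

        fclose(in);
        fclose(out1);
        fclose(out2);
        return 0;
    }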

> [ I just tried splitstream out on my 266MHz Pentium, and yes,
> it doesn't need CPU, it needs faster I/O ]
> 
> Suggestion: if it is I/O intensive, make sure the input file and the
> output files are on physically different disks. Then the operating
> system can schedule I/O operations on each drive concurrently, and
> this will speed things up.

Most definitely - but unforch, most mere mortals in computerdom can't
afford multiple mongo hard drives for their PCs -- we're too busy
trying to afford them for our TiVos!!! ;-)

[[ and for those who don't know, as far as IDE hard drives are
concerned, they *must* be on separate interfaces for that to work -
if one hard drive is slaved to the other hard drive, it won't help
your throughput much at all, because the IDE interface can only talk
to 1 spindle at a time. ]]

> Hope some of these suggestions are useful. Good luck with it Roger.

The ones I understood were *super* helpful - a couple went over me
head... such a long road, but like the roads of BASIC, 6809 assembly,
Perl, COBOL, APL, JCL and to a lesser extent Pascal, Lisp, Logo...
one must start learning & keep using it enough to get it beat into
this tired old brain of mine... (why is 6809 index register
indirection & writing 8-bit position independent code so easy for me,
but C pointers so frelling hard???)

Laterz,
Roger "Merch" Merchberger
