[ExtractStream] more than 2 gigs extracted = not possible on linux?

Steven Webb steve at b...
Mon, 27 Aug 2001 14:47:34 -0600 (MDT)


> > It's not a kernel/OS issue, it's an application/libc issue.
> 
> Actually, it's a kernel/filesystem issue. A 32 bit machine implicitly
> limits the maximum file size under linux, but certain filesystems allow for
> workarounds. I've seen Oracle on reiser with success.

Hmmm. I seem to have read this as well. I thought that the ext2
filesystem was the problem here (as well as the OS and other things). If
I use "touch" I still cannot create a file > 2 Gigs. If I just "cat
/dev/zero > /big/bigfile" it still fails when the 2 Gig file size is
reached.

(4) swebb@s... Mon 2:44pm [/d1] % cat /dev/zero > bigfile
cat: write error: File too large
0.400u 32.120s 1:48.50 29.9% 0+0k 0+0io 104pf+0w
(5) swebb@s... Mon 2:46pm [/d1] % ls -la bigfile
-rw-r--r-- 1 swebb rap 2147483647 Aug 27 14:46 bigfile
(6) swebb@s... Mon 2:46pm [/d1] % 
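
For what it's worth, here's a rough sketch of the application/libc side of
this (not from the thread; the file name, the -D_FILE_OFFSET_BITS=64 flag,
and the assumption of a kernel/glibc/filesystem combination that actually
has large-file support are all mine). It just seeks one byte past the
2^31-1 boundary and writes there:

/* Sketch only: compile with  gcc -D_FILE_OFFSET_BITS=64 -o bigtest bigtest.c
 * so that off_t, open() and lseek() become their 64-bit variants. */
#define _FILE_OFFSET_BITS 64   /* or pass it on the gcc command line */

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    /* with LFS enabled, glibc maps this to open64() and sets O_LARGEFILE */
    int fd = open("bigtest.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* seek one byte past the old 2^31-1 limit and write a byte there;
     * on a plain 32-bit off_t build this offset doesn't even fit, which
     * is the same wall the cat run above hits at 2147483647 bytes */
    off_t past = (off_t)2147483647 + 1;
    if (lseek(fd, past, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }

    close(fd);
    return 0;
}

If something like that fails even when built with 64-bit offsets, then the
limit really is coming from the kernel or the filesystem rather than the
application.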

So, my question still stands: is anyone here extracting a single file > 2
Gigs with linux? Or are those of you using linux just extracting parts of
the stream and then mangling them together later (or using another
filesystem type that supports files > 2 Gigs)?

- Steve

--
EMAIL: (h) steve@b... WEB: http://badcheese.com/~steve
(w) swebb@r...
stevewebb@m...