[ExtractStream] more than 2 gigs extracted = not possible on linux?

sharkey@a... sharkey at a...
Mon, 27 Aug 2001 17:03:47 -0400


> > > It's not a kernel/OS issue, it's an application/libc issue.
> > 
> > Actually, it's a kernel/filesystem issue. A 32 bit machine implicitly
> > limits the maximum file size under linux, but certain filesystems allow for
> > workarounds. I've seen Oracle on reiser with success.
> 
> Hmmm. I seem to have read this as well. I thought that the ext2
> filesystem was the problem here (as well as the OS and other things). If
> I use "touch" I still cannot create a file > 2 Gigs. If I just "cat
> /dev/zero > /big/bigfile" it will still fail when the 2 Gig filesize is
> reached.
> 
> (4) swebb@s... Mon 2:44pm [/d1] % cat /dev/zero > bigfile
> cat: write error: File too large

Remember, "cat" is an application. At least in my distribution (Debian),
cat has not yet been coded for large file support, but "head" has been.
Go figure.

box% head -c 3000000000 /dev/zero > bigfile
box% ls -la bigfile
-rw-r--r-- 1 sharkey sharkey 3000000000 Aug 27 17:01 bigfile
box% ls -lah bigfile
-rw-r--r-- 1 sharkey sharkey 2.8G Aug 27 17:01 bigfile
box% 

That's on a standard ext2 partition.

The 2GB limit is purely at the application/libc level.

Eric