Saturday, February 9, 2008

Create File of a Given Size

Last week, I needed to test the speed of my VPN connection. My plan was to create a file of a given size (say 10 MB) and time how long it took to copy it to another server across the VPN tunnel.

My first task was to create a 10 MB file. On Solaris, this can be done with a single command:
$ mkfile 10m output.dat 


On Linux, you can use the dd command:

$ dd if=/dev/zero of=output.dat  bs=1024  count=10240
10240+0 records in
10240+0 records out
10485760 bytes (10 MB) copied, 0.218581 seconds, 48.0 MB/s
$ ls -hl output.dat
-rw-r--r-- 1 peter peter 10M 2008-02-09 16:21 output.dat


The above dd command creates a zero-filled file named output.dat by writing 10240 blocks (count=10240) of 1024 bytes each (bs=1024): 10240 × 1024 = 10,485,760 bytes, or 10 MB.

An anonymous commenter pointed out that you can also create a 10 MB file like this:
$ dd if=/dev/zero of=output.dat  bs=1M  count=10


Now that the 10 MB file has been created, I can time how long it takes to copy it across the tunnel:

$ time scp path/to/10m.dat  user@192.168.99.10:/some/location
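
If you want a rough throughput figure rather than just the elapsed time, one simple approach is to record the timestamps yourself. This is only a sketch, using the same placeholder host and file as above, and it rounds to whole seconds:

$ START=$(date +%s)
$ scp path/to/10m.dat  user@192.168.99.10:/some/location
$ END=$(date +%s)
$ echo "Transferred 10 MB in $((END - START)) seconds"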


If you can suggest other ways to create a file of arbitrary size, please share them in the comments.

P.S. Articles from this blog on the dd command:
Show progress during dd copy

14 comments:

Anonymous said...

You can also use K / M / G as a suffix for bs.
dd if=/dev/zero of=output.dat bs=1M count=14

would create a 14 MB file.

Peter Leung said...

Thanks!

Anonymous said...

This is useful for "zeroing" out all the blank space on a hard drive (preventing recovery of deleted files) without touching the existing files.

You can also use if=/dev/urandom to fill the file with random data instead of zeros.
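
For example, something along these lines would work (the mount point is just a placeholder; dd is expected to stop with a "No space left on device" error once the disk is full):

$ dd if=/dev/zero of=/mnt/target/zerofill bs=1M   # runs until the free space is exhausted
$ sync
$ rm /mnt/target/zerofill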

Miloss said...

Does anyone know how to keep the disk's content instead? (This unnecessarily decreases the lifetime of my SD card.)

More info: I want to create a filesystem inside a single file (I don't care about the content, so why must I overwrite what's already there?)

Sandra Bies said...
This comment has been removed by the author.
Sandra Bies said...

@ MMlosh:

you can use seek for that:

dd of=yourfile bs=1 count=0 seek=8G

This creates a file of exactly 8 GB in hardly any time at all.
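
To see what is going on, compare the file's apparent size with the space actually allocated (a quick check, using a throwaway file name):

$ dd of=sparse.dat bs=1 count=0 seek=8G
$ ls -lh sparse.dat   # apparent size: 8.0G
$ du -h sparse.dat    # blocks actually allocated: 0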

David said...

I like this last option. The use of seek instead of count has the advantage that the file doesn't actually consume the specified amount of disk space.

Anonymous said...

The only problem with the seek method is that the file doesn't show up as taking space when running df on the volume.

I'm trying to test a script that removes large old files, and it would have been ideal, but sadly no.

So far I have only found the /dev/zero block-writing method to work, but it takes some time to create the files.

Strangely, this is quite possible using the fsutil command-line tool on Windows. I'm sure there is a way to quickly create large test files that show up in df under Linux; please post if you know how.
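
For reference, the Windows command being alluded to is probably fsutil file createnew, which takes a file name and a size in bytes (the file name below is just a placeholder):

C:\> fsutil file createnew bigfile.dat 10485760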

Anonymous said...

fallocate -l 10m file
Creates a 10MiB file without having to write anything to the file AND it isn't a sparse file. My distribution doesn't have mkfile and this is the closest thing I've seen.
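
To confirm the difference from the sparse-file approaches, you can check both the apparent size and the actual allocation (a quick sanity check, with a throwaway file name):

$ fallocate -l 10M preallocated.dat
$ ls -lh preallocated.dat   # apparent size: 10M
$ du -h preallocated.dat    # blocks actually allocated: 10M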

DigitalPioneer said...

The fallocate solution looks like the "right" solution. Using dd works, but it's bloody slow. I also came up with one of my own: if I need a file of 1173363845 bytes, I can:
$ touch file
$ truncate -s 1173363845 file
Voila, done. And very fast. dd proved too slow for files of any significant size.

Bram said...

On Linux you can use the truncate command; it lets the filesystem handle the allocation, so it doesn't write out all the bytes like dd would: much, much faster.

Unknown said...

If the goal is to create a ballast file, fallocate is the best solution: df will show that the available space has been reduced. The truncate command does not reduce the available disk space.

yashpogra1 said...

How do I create a permanent, fixed-size folder?

yashpogra1 said...

I have Ubuntu 12.04 and I want to fix my folder size permanently.