Re: High throughput disk write: CreateFile/WriteFile?
On Jun 29, 9:04 am, Ulrich Eckhardt <eckha...@satorlaser.com> wrote:
> Brandon wrote:
>> I have an application where I need to write text and binary (numeric)
>> data to a file at ~200 MB/s (8 MB/40 ms). I am currently using fprintf
>> and fwrite, respectively, but I'm not achieving the desired
>> throughput.
> Just wondering, but what throughput did you manage to achieve?
40 MB/s. I'm opening and closing a file every 8 datasets, at 8 MB per
dataset. That takes approx. 1.6 s, so 64 MB per file / 1.6 s comes out to
about 40 MB/s.
>> I've been looking into the Windows CreateFile based methods instead.
> This usually reduces the overhead a bit, because those are the native APIs
> while fopen/fprintf/fwrite are just wrappers around them. However, even the
> latter don't have to be slow; it still depends on how and what you are
> doing.
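Something along these lines is what I understand by "it depends on how you
are doing it": give stdio a large user buffer and hand it whole datasets.
Untested sketch; the function name and the 4 MB buffer size are my own
guesses, not anything from the actual application.

#include <stdio.h>

// Untested sketch: a large user buffer for stdio plus one fwrite per
// dataset, so small formatted pieces get coalesced and the bulk data
// goes out in big chunks.
static char g_ioBuffer[4 * 1024 * 1024];

bool WriteDatasetStdio(const char* szPath, const void* pData, size_t nBytes)
{
    FILE* fp = fopen(szPath, "wb");        // binary mode, no CRLF translation
    if (fp == NULL)
        return false;

    // Must be called before any other operation on the stream.
    setvbuf(fp, g_ioBuffer, _IOFBF, sizeof(g_ioBuffer));

    size_t nWritten = fwrite(pData, 1, nBytes, fp); // one call per 8 MB dataset
    fclose(fp);                            // flushes whatever is still buffered
    return nWritten == nBytes;
}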
>> char szTextToAppendToLog[1024]; // Temp char buffer
>> char szFilePath[1024];          // Output file name
>> char szTimeStamp[32];           // Time character string
>> char szTemp[128];               // Temp character string
> I'm always wondering if hackers think that by using powers of two their
> programs will somehow magically work correctly... (SCNR)
Well, I was always taught that it was a good practice to allocate
memory in powers of 2 along byte boundaries.
>> hWriteFile = CreateFile(
>>     szFilePath,              // File path
>>     GENERIC_WRITE,           // Open for write
>>     NULL,                    // Do not share
>>     NULL,                    // Default security
>>     CREATE_ALWAYS,           // Overwrite existing files
>>     FILE_FLAG_WRITE_THROUGH, //FILE_FLAG_OVERLAPPED, // Normal file
>>     NULL);                   // No template
> One thing here: FILE_FLAG_WRITE_THROUGH means that this function will not do
> any buffering. If you have short bursts of data to write, this will impact
> performance negatively.
Short bursts? Is 8 MB every 40 ms a short burst? 200 MB/s is pretty ambitious
imo, and not even attainable on a non-RAID disk to my knowledge.
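For comparison, this is roughly what I take the buffered alternative to be:
drop FILE_FLAG_WRITE_THROUGH so the system cache can absorb each 8 MB burst
and flush it in the background, and issue one WriteFile per dataset. Untested
sketch; the function name and error handling are mine, not code from the
application.

#include <windows.h>

bool WriteDatasetCached(const char* szPath, const void* pData, DWORD dwBytes)
{
    HANDLE hFile = CreateFileA(
        szPath,
        GENERIC_WRITE,
        0,                      // no sharing
        NULL,                   // default security
        CREATE_ALWAYS,          // overwrite existing files
        FILE_ATTRIBUTE_NORMAL,  // cached writes, no write-through
        NULL);                  // no template
    if (hFile == INVALID_HANDLE_VALUE)
        return false;

    DWORD dwWritten = 0;
    BOOL bOk = WriteFile(hFile, pData, dwBytes, &dwWritten, NULL);
    CloseHandle(hFile);
    return bOk && dwWritten == dwBytes;
}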
>> if (hWriteFile == INVALID_HANDLE_VALUE)
>> {
>>     sprintf_s(szTextToAppendToLog, sizeof(szTextToAppendToLog),
>>               "ERROR: Output file %s failed to open.",
>>               szFilePath);
>>     pThis->UI->AppendToStatLog(szTextToAppendToLog);
>> }
> You should throw an exception here; continuing is just plain wrong.
Indeed, but I'm not such a good programmer, and for now I'm not worried
about corner cases; I just want the performance.
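For reference, the check Uli is asking for could look like this. It reuses
the names from the snippet above (hWriteFile, szFilePath,
szTextToAppendToLog, pThis) and needs <stdexcept>; std::runtime_error is just
one choice of exception, the point is only to stop before writing through an
invalid handle.

if (hWriteFile == INVALID_HANDLE_VALUE)
{
    DWORD dwErr = GetLastError();   // capture before any other API call
    sprintf_s(szTextToAppendToLog, sizeof(szTextToAppendToLog),
              "ERROR: Output file %s failed to open (error %lu).",
              szFilePath, dwErr);
    pThis->UI->AppendToStatLog(szTextToAppendToLog);
    throw std::runtime_error(szTextToAppendToLog);  // don't continue with a bad handle
}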
>> // Write ASCII header to file.
>> sprintf_s(szTemp, sizeof(szTemp), "MyData: Date:%s\r\n", szTimeStamp);
>> WriteFile(
>>     hWriteFile,
>>     szTemp,
>>     (DWORD) sizeof(szTemp),
>>     lpNumBytesWritten,
>>     NULL);
> Apart from the NULL pointer for the number of written bytes, here might be a
> reason for performance problems: you are writing short pieces of data
> without any intermediate buffering. Also, I'd suggest not doing any C-style
> casts ("(DWORD) sizeof(szTemp)") because those bear the danger of hiding
> errors.
>
> Uli
I'm not having performance problems with CreateFile/WriteFile; it simply
doesn't work for me at all. I'm only having performance limitations with
fopen/fwrite.
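For completeness, an untested sketch of the header write with Uli's points
applied, reusing hWriteFile, szTemp, szTimeStamp and pThis from the code
above (needs <string.h> for strlen): a real bytes-written variable, since as
far as I know lpNumberOfBytesWritten may only be NULL for overlapped writes,
which alone could explain WriteFile refusing to work; the string length
instead of sizeof, so only the formatted text is written; and a static_cast
instead of the C-style cast.

// Write ASCII header to file (revised sketch).
sprintf_s(szTemp, sizeof(szTemp), "MyData: Date:%s\r\n", szTimeStamp);

DWORD dwLen = static_cast<DWORD>(strlen(szTemp)); // only the formatted text
DWORD dwBytesWritten = 0;   // address passed below; may not be NULL for non-overlapped I/O
BOOL bOk = WriteFile(
    hWriteFile,
    szTemp,
    dwLen,
    &dwBytesWritten,
    NULL);
if (!bOk || dwBytesWritten != dwLen)
    pThis->UI->AppendToStatLog("ERROR: Header write failed.");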