Bug 7972 – std.file.read allocates a buffer the size of the file even when given an upper limit

Status
RESOLVED
Resolution
FIXED
Severity
normal
Priority
P2
Component
phobos
Product
D
Version
D2
Platform
All
OS
Linux
Creation time
2012-04-23T02:19:00Z
Last change time
2016-03-20T23:40:52Z
Keywords
preapproved
Assigned to
astrothayne
Creator
tbanelwebmin

Comments

Comment #0 by tbanelwebmin — 2012-04-23T02:19:18Z
When calling:

    import std.file;
    read("/path/to/bigfile", 1024);

a core.exception.OutOfMemoryError is thrown. This is because a buffer the size of "/path/to/bigfile" is allocated instead of 1024 bytes.

Fix: in std/file.d, line 327, change maxInitialAlloc to minInitialAlloc. Once that is done, maxInitialAlloc is no longer useful and may be removed.
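
[Editorial note: a minimal reproduction sketch of the call described above; "/path/to/bigfile" is a placeholder for any file much larger than available memory.]

    import std.file : read;

    void main()
    {
        // Expected: at most 1024 bytes are returned.
        // Observed before the fix: a buffer the size of the whole file is
        // allocated up front, which can throw core.exception.OutOfMemoryError
        // for a sufficiently large file.
        auto data = read("/path/to/bigfile", 1024);
        assert(data.length <= 1024);
    }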
Comment #1 by tbanelwebmin — 2013-01-11T13:50:05Z
The OutOfMemoryError is still there in version 2.061 when calling:

    read("/path/to/bigfile", 1024);

The function void[] read(in char[] name, size_t upTo) is supposed to return at most upTo bytes, even for a very large file. But internally, in the Posix version, the allocated buffer is the size of the file (line 222: immutable initialAlloc = ...). The fix is to take upTo into account when computing initialAlloc:

    immutable initialAlloc = to!size_t(min(upTo,
        statbuf.st_size
            ? min(statbuf.st_size + 1, maxInitialAlloc)
            : minInitialAlloc));
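
[Editorial note: a standalone sketch of the allocation-size logic proposed above. The helper initialAllocSize and the constant values are illustrative assumptions, not the actual std/file.d code.]

    import std.algorithm.comparison : min;
    import std.conv : to;

    // Assumed placeholder values standing in for the constants in std/file.d;
    // the real values there may differ.
    enum size_t minInitialAlloc = 1024 * 4;
    enum size_t maxInitialAlloc = size_t.max / 2;

    // Compute how many bytes to allocate before the first read, never
    // exceeding the caller-supplied upper limit upTo.
    size_t initialAllocSize(ulong fileSize, size_t upTo)
    {
        // A zero fileSize (e.g. files under /proc) falls back to a small
        // default instead of trusting the reported size.
        immutable sizeHint = fileSize
            ? min(fileSize + 1, maxInitialAlloc)
            : minInitialAlloc;
        return to!size_t(min(upTo, sizeHint));
    }

    unittest
    {
        assert(initialAllocSize(10_000_000, 1024) == 1024); // capped at upTo
        assert(initialAllocSize(0, 1024) == 1024);          // size unknown, still capped
    }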
Comment #2 by github-bugzilla — 2016-03-20T23:40:51Z
Commit pushed to master at https://github.com/D-Programming-Language/phobos

https://github.com/D-Programming-Language/phobos/commit/832928adafe372b7fd920b0d7c2e2cf433984b47

Don't allocate more than upTo bytes in std.file.read

Fixes Issue 7972