fix the issue of not reading too long file.#4
Conversation
mrombout left a comment
I tried extracting from a 709585-byte file on a build without this patch and it extracted without problems. But a streaming approach is indeed better than loading the entire file into memory, as was done previously.
return poFile, err
f, e := os.Open(filePath)
if e != nil {
	panic(e)
I think it's better to keep the panic()s in main and use plain errors throughout the code base.
- panic(e)
+ return poFile, err
content, err := fs.readFile(filePath)
if err != nil {
	return poFile, err
f, e := os.Open(filePath)
Revert to `err` to keep in line with standard Go practices.
- f, e := os.Open(filePath)
+ f, err := os.Open(filePath)
content, err := fs.readFile(filePath)
if err != nil {
	return poFile, err
f, e := os.Open(filePath)
In order to keep the tests working, this os.Open call needs to go through the fileSystem interface and then be mocked.
Or better yet, these days it's probably better to rely on fs.FS.
Content longer than 65565 bytes will not be processed.