Stress testing V
Posted: Sat Nov 28, 2015 11:17 am
So, the V website says it can handle 100MB or even 100GB files with ease. I decided to put that to the test.
I used the technique at http://www.windows-commandline.com/how- ... ummy-file/ to generate a single 128GB text file with the same line repeated. The GNU Win32 wc -l tool reported the total number of lines in the file as 2147483680.
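For anyone who wants to reproduce this without the linked article, something along these lines would do the same job. This is just a rough sketch of the idea (write the same line over and over until you hit the target size); the file name, line content and exact target size are my own placeholders, not the commands from the article:

```python
# Rough sketch (my own, not the commands from the linked article):
# build a large buffer of the repeated line and write it until the
# file reaches roughly the target size.

TARGET_BYTES = 128 * 1024 ** 3  # ~128GB, adjust to taste
LINE = b"The quick brown fox jumps over the lazy dog\r\n"  # any line will do

# A ~64MB chunk of repeated lines keeps each write large and fast.
chunk = LINE * ((64 * 1024 ** 2) // len(LINE))

written = 0
with open("dummy.txt", "wb") as f:
    while written < TARGET_BYTES:
        f.write(chunk)
        written += len(chunk)
```

Afterwards the line count can be checked with wc -l as mentioned above.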
My test kit was: Core i7 4790, 16GB RAM, 2TB disk, Windows 8.1 with Classic Shell, and V14 SR7 x64. Note this was physical hardware, not a VM.
Opening it in the test environment, I could scroll around quite easily, search for content and so on. When counting instances of a particular search phrase I gave up after around 26% of the file had been searched, as it was taking too long; the search seems to run on the same thread as the main application. Going to line 1000000 took a few seconds but got there easily enough. Using the goto command to jump to the last line number minus 1, I again gave up, though I didn't hit any problems.
The vertical scrollbar appears to show the position within each chunk of the file rather than within the file as a whole; I had to click down past the end of the current chunk to move on to the next one.
While I had this running I kept Task Manager open. It seems ludicrous that on a system with 16GB of RAM, at only 14% total RAM usage, I couldn't get V to use more than 2.4MB of memory, when there are obvious benefits to caching parts of such a large file. This seems to leave little benefit in moving to the 64-bit version, whereas I would have thought files of this size would gain greatly from it.
Attempts to get a more diverse and realistic data set, such as the complete works of Shakespeare or Dickens from Project Gutenberg as a seed for duplication, made creating the file rather more difficult: at 5.1MB and 6.4MB respectively they didn't scale well to round file sizes (one possible workaround is sketched below). Another possible candidate for large-scale testing would be a Wikimedia dump such as those available from http://dumps.wikimedia.org/.
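If I'd wanted to force a round size from one of those seed texts anyway, appending whole copies and truncating the final copy would have worked. A hypothetical sketch, with the file names and the 10GB target purely as examples:

```python
# Hypothetical sketch: scale a small seed file (e.g. a Project Gutenberg
# text saved as seed.txt) up to an exact round size by appending whole
# copies and truncating the final one. File names and target are examples.

TARGET_BYTES = 10 * 1024 ** 3  # a round 10GB, say

with open("seed.txt", "rb") as src:
    seed = src.read()  # a few MB, small enough to hold in memory

written = 0
with open("scaled.txt", "wb") as out:
    while written + len(seed) <= TARGET_BYTES:
        out.write(seed)
        written += len(seed)
    out.write(seed[:TARGET_BYTES - written])  # partial copy to hit the exact size
```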
Overall, I'd be the first to admit that my technique bears no resemblance to my own real-world usage, and that in some ways these results therefore don't have any useful meaning. However, in that respect they are no worse than other stress testing that colleagues and I have done using JMeter against web applications. My results do bear out the website's claim that this application can handle 100GB files, though.
I've probably spent more time stress testing my PC than V itself.
John