"Joseph M. Newcomer" <newcomer@flounder.com> wrote in message
As others have pointed out, it is absolutely impossible to tell the source of the problem without seeing the code.
CStringA line;
while(file.ReadString(line))
   { /* read loop */
    line.Trim();
    if(line.IsEmpty())
       continue;
    if(line.GetLength() >= 2 && line[0] == '$' && line[1] == '$')
       continue;
    int n = line.Find('=');
    if(n < 0)
       { /* bad line */
        ... deal with reporting bad line
        continue;
       } /* bad line */
    mymap[line.Left(n)] = line.Mid(n + 1); // Mid, not Right: everything after the '='
   } /* read loop */
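For anyone without MFC at hand, here is a minimal standard-C++ sketch of the same loop, using std::istream/std::getline in place of CStdioFile::ReadString and std::map in place of CMap. The function name parseRules and the trim helper are illustrative, not from the original post.

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Strip leading/trailing whitespace, analogous to CString::Trim().
static std::string trim(const std::string& s)
{
    const char* ws = " \t\r\n";
    std::string::size_type b = s.find_first_not_of(ws);
    if (b == std::string::npos) return "";
    std::string::size_type e = s.find_last_not_of(ws);
    return s.substr(b, e - b + 1);
}

// Read name=value lines; skip blanks, $$ comments, and lines with no '='.
std::map<std::string, std::string> parseRules(std::istream& in)
{
    std::map<std::string, std::string> rules;
    std::string raw;
    while (std::getline(in, raw))                    // split on \n
    {
        std::string line = trim(raw);
        if (line.empty()) continue;                  // blank line
        if (line.compare(0, 2, "$$") == 0) continue; // $$ comment
        std::string::size_type n = line.find('=');
        if (n == std::string::npos) continue;        // bad line, no '='
        rules[line.substr(0, n)] = line.substr(n + 1);
    }
    return rules;
}
```

A few hundred lines of this should parse in well under a millisecond; if it doesn't, the problem is elsewhere.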
Even with a bad hashing function on the map, it is difficult for me to guess how this loop could require 15ms/line. If you show your code, be sure to show the hash function you added for the CMap.
Also, the above code does not check for duplicates. If you are checking for duplicates, CMap with a poor hashing algorithm might be a Really Bad Choice.
As already pointed out, you would be better off using std::map.
I just measured my PowerPoint Indexer, which reads 1300 rules from a rule file, all of the form name=value; it runs in less than 1 second (I set a breakpoint at the start of the loop, one at the end, and measured the time with a stopwatch). I use std::map to handle the data, and I do check for duplicates.
typedef std::pair<CString, CString> RuleElement;
typedef std::map<CString, CString> RuleMap; // CString has operator<, so the default std::less works
RuleMap rules;
rules.insert(RuleElement(line.Left(n), line.Mid(n + 1)));
should do it. To look up a rule, do
RuleMap::iterator rule = rules.find(name);
if(rule == rules.end())
... not found
else
... use rule->second to get the value associated with name
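The duplicate check falls out of insert() itself: it returns a pair<iterator, bool>, and the bool is false when the key was already present. A small sketch, using std::string instead of CString so it stands alone (the helper name addRule is mine, not from the post):

```cpp
#include <map>
#include <string>
#include <utility>

// Try to add a rule; returns false (and leaves the map unchanged)
// if the key is a duplicate.
bool addRule(std::map<std::string, std::string>& rules,
             const std::string& key, const std::string& value)
{
    return rules.insert(std::make_pair(key, value)).second;
}
```

On a duplicate, the existing value is kept, so you can report the offending line and carry on.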
joe
On Mon, 20 Oct 2008 16:11:32 +0200, Anders Eriksson <andis59@gmail.com>
wrote:
Hello,
I have a program that reads a text file and then 'parses' it using these
rules
The data is line based => split on \n
if the line starts with $$ it's a comment => ignore, jump to next line
each line consists of a key and a value separated by an =
e.g.
TEXT="Hello World"
The parser will create a CMap with the key as Key and value as Value.
This has to happen fast! At the moment I need about 6 seconds to read and parse a file that is 400 lines (about 7KB), which is far too slow. I need to do it in max 2 seconds or less...
So how can I parse the file as fast as possible?
// Anders
Joseph M. Newcomer [MVP]
email: newcomer@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm