Re: new vs. vector vs. array
On Apr 24, 2:28 pm, Lionel B <m...@privacy.net> wrote:
On Fri, 24 Apr 2009 13:58:50 +0200, Mark S. wrote:
SG wrote:
On 24 Apr., 10:26, "Mark S." <markstar.n0s...@hotmail.com> wrote:
[...]
The problem is that I don't know how to deal with
multidimensional vectors or arrays. I was thinking of
using a single vector which is n*n big. Of course, I would
have to convert the indices then (e.g. for 3*3, [1][1]
would become [4], counting zero-based in row-major order).
Moreover, this solution does not seem very elegant to me.
Why not?
Well, it means I have to spend a lot of extra time to
calculate which parts of the array I actually want to
read/write. That sounds very inefficient to me.
Actually, it's extremely efficient, easy for compilers to
optimize and very simple to implement (see Daniel T's post).
In any case, you shouldn't be worrying about efficiency at
this stage (google "premature optimization").
Especially since the data set is extremely small, so performance
is unlikely to be an issue.
The chief alternative is a "vector of vectors" approach, which
is probably more fiddly to implement and unlikely to be more
efficient.
On most modern machines, an indirection is more expensive than a
multiplication. If the data set were bigger, locality issues
would also argue against the vector of pointers.
Of course, if he really does want a vector of two dimensions,
the simplest solution is:
std::vector< std::vector< int > >
v( n, std::vector< int >( n ) ) ;
Under the hood, that's basically the vector of pointers
solution, at least with regards to performance, but it's a lot
easier to program than any of the alternatives, and the
difference in performance shouldn't matter unless he's thinking
in terms of a 1000x1000 Sudoku.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34