Re: Any tips?
On 3/16/2013 5:42 PM, James Kanze wrote:
Really? I've used both (well, not vim). I've also used Qt Creator. I'd
like to see how you measure "productivity".
Getting working code out of the door. Actually, creating and
maintaining working code effectively, for a complete definition
of working code (i.e. maintainable, tested, documented...).
And what evidence do you have that this is easier with vim/bash/make
than with VS? Just saying so isn't enough. As I say, I've worked in both
environments and I don't see the massive disparity you claim. I think
there's a little bit of Unix bigotry here.
Personal experience. And I do know Windows and VS now.
Originally, I put my lower productivity under Windows down to
the fact that I wasn't familiar with it. Now, it's gotten to the
point where every time I want to do something, I can't find the
proper support in VS, and none of the so-called experts seem to
know how to do it either---they often seem surprised that such
things actually can be done.
I have the same personal experience in the opposite direction. That
makes me stop short of claims either way; rather, I conclude that some
people are simply far more productive with one toolset than another, and
by all means should be left alone to use their tools, whatever those are.
A claim like "now I'm familiar" makes me even more skeptical -- I
personally keep finding better tools I had missed every few weeks,
despite working with VS for 2+ decades.
The difference is,
perhaps, that in the Unix world, you expect to have to actually
read the compiler documentation, and determine what options you
need for your project.
It's not a difference. You're supposed to walk the walk and talk the
talk. Yet I see the majority of people never bother: they just poke
around and experiment on the fly, then draw conclusions about the system
they use. That holds independently of the environment.
VS pops up with two default
configurations (Debug and Release), neither of which really
corresponds to what you might want to do in most cases.
IME that makes perfect sense for most things, and a proliferation of
configs is suspicious at best. There are plenty of ways to handle
different config options inside the module itself -- along with checks
or alterations of settings -- since you can do much with #pragma, and
from the outside just use defines.
(My personal libraries are developed with something like eight
different configurations: optimized or not, profiling or not,
multi-threaded or not.)
For libs it makes more sense, though to me profiling looks like a local
alteration rather than a config -- and the single-threaded config I
think I've just never used -- does it have any actual advantage?
But AFAIK it is not hard to save your project as an AppWizard template,
so once you've created your prototype with the 8 configs, you can use it
for all subsequent projects.
For anything more complex, the fact that you don't have to
write a makefile is a win for a beginner.
Not just beginners.
It's a lot like C++. When you begin, you're overwhelmed by how
much you need to know. With a little experience, however,
you've got a good tool set for your style of programming, and a
number of more or less standard idioms, and you can knock out
complex programs in very little time.
Yeah, but complex need not mean done a certain way. E.g., code
generation can be put into the build through a translator or a prebuild
step -- but you may just run it by hand and put the result directly into
the sources. Depending on how often the input changes, the latter may be
far more efficient. And simpler.
On Windows the most prevalent generation step is related to COM, but
that is supported internally through #import. So you can travel many
miles before even the old-style, rather limited VS project management
becomes a constraint. The need for makefiles usually comes from
non-native requirements, cross-platform projects, or people coming from
another realm and bending the system to their familiar ways.
If you limit yourself to the IDE, just about anything useful.
Try creating a project in which several different sources are
generated from other programs. There's a special mechanism for
the case where a single source and header are generated by
a single tool (e.g. lex and yacc), and you can have one (but
only one) pre-build step. But the build system doesn't
understand any dependencies created by the pre-build, and if you
want two or more operations, you have to wrap them into some
sort of script. (The fact that the build system decides what
needs recompiling *before* doing the pre-build is a serious
error, since the purpose of the pre-build is normally to
regenerate some files.)
OK, I think you have me! On the last VS project I worked on we avoided
such things (probably because it was hard).
Or not really necessary. :) Actually with the msbuild-based way it's
not that hard, and examples can be found in OS projects.
The only thing that did do this seemed to rebuild every time, even when
it wasn't necessary.
Guess you did not specify inputs/outputs correctly. ;)
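FWIW, with the msbuild-based project format the declaration looks
roughly like this (the target name, tool name, and file names are all
invented for the sketch) -- when Inputs and Outputs are declared, the
engine compares timestamps and skips the target if nothing changed:

```xml
<!-- Sketch of a custom MSBuild target with incremental-build support.
     MSBuild compares timestamps of Inputs vs Outputs and skips the
     Exec when the outputs are already up to date. -->
<Target Name="GenMessages"
        BeforeTargets="ClCompile"
        Inputs="messages.def"
        Outputs="$(IntDir)messages.h">
  <Exec Command="gen_messages.exe messages.def $(IntDir)messages.h" />
</Target>
```

Leave out Inputs/Outputs and you get exactly the rebuild-every-time
behavior described above.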
*Not* generating significant amounts of code by machine results
in an enormous loss of productivity. I'd have to go back 25 or
30 years to find a project in which there wasn't a lot of
machine generated code. Why should I waste my time doing things
the machine can do better?
IMO that should depend on the nature of the project. And with C++ you
have plenty of internal mechanisms that work fine for generating code
(macros, templates). The cases I recall that involved generation had
their source material changing only once every 2-4 weeks (database
schema, XSD, message descriptions), so executing the translators by hand
was perfectly fair game. Imposing it on every build would be a waste of
resources.
Sed, awk and grep are the three standard ones.
And with VS you have most of their use cases built in -- in advanced
form too, as you can set the search scope to "all files in
project/solution", filtered by extension -- something that is hardly
trivial from the command line.
And hopefully everyone is aware of the native Win32 builds of all the
GNU tools, which work fine from the windoze command line -- and where
shell rules are needed, the easiest thing is to summon a git bash :)