Re: Great SWT Program
On Nov 21, 8:56 pm, twerpina...@gmail.com wrote:
On Nov 20, 2:02 am, Owen Jacobson <angrybald...@gmail.com> wrote:
Of course, this is only of interest if you're doing remote stuff.
Given that I do a lot of remote work over the internet at large...
Given that your computing needs are rather specialized and unusual,
your experiences cannot really provide much of a useful guideline for
the rest of the world in general.
And yet, there is a large enough community of computer users to
sustain the development of tools that meet my needs, saving me from
having to write them myself. There is also a relatively large pool of
users for whom the same tools provide arguable benefit at best, for
any of a number of reasons. Even Big Vendors (Apple, Microsoft, Sun,
as well as lesser-known names like Be, before they got bought out), who
are only in it for the money, are improving, rather than eliminating,
support for text-mode tools and command lines.
(The new version of OS X adds first-party support for SSH passphrases
in Keychain. I'm very pleased.)
On the other hand, when working locally I use a hybrid of command-line
and graphical tools. Bash is an extremely powerful automation tool,
for all its faults
Ah, someone here (other than me) finally acknowledges that those
things have faults.
I don't think anyone here has claimed that emacs or vi or bash or any
other program is perfect and flawless. If you believe I've professed
otherwise myself, please, cite a message ID or quote a passage so that
I may explain more clearly what I meant, or correct my mistake. I
just dispute whether having faults I can understand and work with
(even if others apparently can't or won't) makes a tool worthless. I
also find your definition of "faults" to be pretty wildly divergent
from mine.
The only place perfect programs exist is in Discrete Mathematics
textbooks. All actual software sucks.
The GUI tools I use [cognitive dissonance deleted]
The drugs you're on ... can I have some, please? Or at least be hooked
up with your supplier? :)
Absolutely.
<http://www.apple.ca/>
<http://www.macromates.com/>
Enjoy!
-- My God, why are you connecting to one machine to then operate an
SVN client, check out some files, and edit them remotely, instead of
connecting to the repository directly from your local machine and
skipping the middleman?!
Frequently I have the same files checked out in both places. What of
it? Some tasks are easier or more natural to do in-place: have you
ever tried to configure JBoss without editing things in place? It's
even more spectacularly unpleasant than the alternative.
And when Internet 2 becomes widespread the way Internet 1 did, or they
just get enough fiber and space-age switches installed at the backbone
and finally have fiber or WiMAX for the last mile, it will be painless
to use remote-desktop software over the network too. It already is on
a 100 Mbit LAN.
I2 is not going to "replace" the internet; as has already been
happening for years, useful technologies developed for I2 will be
integrated into the existing internet, organically, as they mature.
I2 itself is just a set of eye-wateringly expensive network links
between some research facilities and universities; the quality and
capacity involved has scaled to remain infeasible for the internet at
large even as the bandwidth of the internet has grown, and it will
probably continue to scale that way until various theoretical limits
on information density are approached.
Did you have a point here? Thing being, the public Internet in a few
years will have the speeds and quality that I2 has now, although the
I2 at that time will also have advanced. The bandwidth to comfortably
run a Windows-style GUI remotely will be here in less than a decade
and probably no more than five years.
Expecting network bandwidth to increase to fit the problem is one of
the fundamental fallacies of distributed computing. That strategy has
rarely worked out; it's much more likely that, by the time bandwidth,
latency, and reliability to the average remote node on the internet
are good enough to run a modern-as-of-2007 GUI seamlessly over it,
GUIs will have become even more bandwidth-intensive. Certainly they
seem to be heading that way now, with marketing wonks and
salescreatures in charge of adding pretty-but-useless features like
reflective widgets (which drive up the entropy of the widget's
graphics and make them hard to compress) and application-controlled
skins (which eschew standardized control drawing and make it
impossible to delegate rendering to the nearer-to-the-user peer) to
GUI programs.
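A back-of-envelope sketch makes the bandwidth point concrete. The
resolution, colour depth, and frame rate below are my own illustrative
assumptions, not figures from this thread:

```java
// Back-of-envelope estimate: raw bandwidth needed to push uncompressed
// screen updates over the wire. All numbers are illustrative assumptions.
public class ScreenBandwidth {
    public static void main(String[] args) {
        int width = 1600, height = 1200;   // a typical 2007-era desktop
        int bytesPerPixel = 3;             // 24-bit colour
        int framesPerSecond = 30;
        long bitsPerSecond =
            (long) width * height * bytesPerPixel * 8 * framesPerSecond;
        // Roughly 1382 Mbit/s -- an order of magnitude beyond a 100 Mbit
        // LAN, which is why remote-display protocols lean so heavily on
        // compression and damage-region deltas, and why hard-to-compress
        // eye candy hurts them.
        System.out.printf("Uncompressed: ~%.1f Mbit/s%n", bitsPerSecond / 1e6);
    }
}
```

The interesting part is how far off the budget is even before eye
candy: compression has to win back more than a factor of ten just to
fit a LAN, let alone a consumer uplink.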
And even if bandwidth does eventually catch up to GUIs, you can't go
faster than the speed of light: the fastest a remote interface can
possibly respond when you're sitting halfway around the world from the
computer the program is running on is around 130 ms. When you're
typing, a steady, reliable tenth-of-a-second lag is annoying, but not
distressingly so. When you're waiting for a control to be redrawn or
for a menu to open, that same tenth of a second is jarring and
unpleasant. Try it yourself sometime: borrow Raymond Chen's example
program or one of the .NET samples and add a 60 ms sleep at the start
and another at the end of every UI event handler.
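Where the speed-of-light figure comes from, as a quick sketch; the
distance is my rough assumption (half of Earth's circumference) and
real routes only make it worse:

```java
// Lower bound on round-trip time to a host halfway around the world.
// Assumptions: straight-line path of half Earth's circumference, signal
// at the speed of light in vacuum. Real fiber paths are longer, and
// light in fiber travels roughly a third slower than in vacuum.
public class LatencyFloor {
    public static void main(String[] args) {
        double oneWayKm = 20_000;        // about half Earth's circumference
        double lightKmPerMs = 299.792;   // speed of light in vacuum, km/ms
        double roundTripMs = 2 * oneWayKm / lightKmPerMs;
        // About 133 ms before the remote end does any work at all --
        // no routing, protocol, or rendering overhead included.
        System.out.printf("Best-case round trip: %.0f ms%n", roundTripMs);
    }
}
```

No amount of backbone fiber changes this floor; it's why latency, not
bandwidth, is the part of the remote-GUI problem that never goes away.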