Discussion:
Qt 4.4 Released
Mike Margerum
2008-05-09 14:47:02 UTC
Permalink
Qt 4.4 runs on Windows CE now. That seals the deal for me. I'm buying it.

I've bought BCB 2006/2007 basically as welfare, in the hope that Borland
would go multiplatform or drastically improve the VCL and C++ standards
compliance.

Au revoir Borland... I mean CodeGear... oh wait, darn it, I mean Embarcadero.

http://www.osnews.com/story/19720/Qt_4.4_Released
na
2008-05-09 20:47:30 UTC
Permalink
Qt 4.4 runs on Windows CE now. That seals the deal for me. I'm buying it.
I've bought BCB 2006/2007 basically as welfare, in the hope that Borland would go multiplatform or drastically improve
the VCL and C++ standards compliance.
Have you seen http://twinforms.com ? Looks promising...
Mike Margerum
2008-05-10 03:48:18 UTC
Permalink
I've spent quite a bit of time with wxWidgets. It's not too bad but QT
is much better designed imo and also has official support. Well worth
the money for me...
Andre Kaufmann
2008-05-11 05:15:32 UTC
Permalink
Post by Mike Margerum
I've spent quite a bit of time with wxWidgets. It's not too bad but QT
is much better designed imo and also has official support. Well worth
the money for me...
QT4 is a good framework, for sure. But the price of the commercial
version is >for me< too high, if the price I've found is correct.
And I don't know if Nokia is really interested in targeting all
platforms (in the future).

But I wouldn't use pure C++ for cross-platform programming either.
You would have to use the C++ standard library and IMHO the C++
iostreams library is quite unusable regarding performance.

Andre
Alan Bellingham
2008-05-11 12:19:44 UTC
Permalink
Post by Andre Kaufmann
But I wouldn't use pure C++ for cross-platform programming either.
You would have to use the C++ standard library and ...
Reasonable enough
Post by Andre Kaufmann
... IMHO the C++
iostreams library is quite unusable regarding performance.
So use std::fread() and std::fwrite(), then. If your environment isn't
providing those, it's not providing standard C++ anyway.

Alan Bellingham
--
Team Browns
ACCU Conference 2009: to be announced
Andre Kaufmann
2008-05-11 17:38:41 UTC
Permalink
Post by Alan Bellingham
[...]
So use std::fread() and std::fwrite(), then. If your environment isn't
providing those, it's not providing standard C++ anyway.
Not that I want to be pedantic, but that is IMHO the C way to write to
files. I agree those functions are commonly faster, as long as you don't use
fprintf directly - but this depends on the implementation (see below):

E.g. writing 500000 integers (a real-world example would be streaming an
XML file) to a text file on my computer (Vista, C++) - although this
additionally/mostly measures string conversion performance
(code at the end of this post):

C++ COMPILER X: AMD 3500+ WIN VISTA
fprintf: 8417 ms
iostreams: 7329 ms
sprintf: 4463 ms
itoa: 2001 ms (not standard conforming ?)
C#: 2261 ms


C++ COMPILER X: INTEL QUAD CORE WIN2008
iostreams: 5091 ms
fprintf: 4586 ms
sprintf: 2322 ms
itoa: 1263 ms
C#: 1045 ms


C++ COMPILER Y: INTEL QUAD CORE WIN2008
fprintf: 4485 ms
sprintf: 2360 ms
iostreams: 2047 ms
itoa: 1656 ms
C#: 1045 ms



If I use the non-standard-conforming itoa function, C++ is reasonably
fast. If I convert the code, doing most of the standard stuff on my own
and using my own buffering, it outperforms every other language.
But what is a C++ library good for, if I have to code everything on my
own to be reasonably performant?

I know that I can tune iostreams by using my own buffering and string
conversion, but then again I'm primarily coding most of the stuff on my own,
which IMHO shouldn't be necessary.
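
For reference, the buffer tweak I mean looks roughly like this (a sketch only;
whether pubsetbuf honors the requested buffer is implementation-defined, and on
some implementations it has to be called before open()):

#include <fstream>
#include <vector>

void write_with_big_buffer()
{
    std::vector<char> buf(1 << 20);                      // 1 MB stream buffer
    std::ofstream f;
    f.rdbuf()->pubsetbuf(&buf[0], (std::streamsize)buf.size());
    f.open("ofstream_buffered.txt");                     // file name is just an example
    for (unsigned int i = 0; i < 5000000; ++i) f << i;
}                                                        // f is destroyed (and flushed) before buf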

Code samples:
-------------

fprintf:

FILE* file3 = fopen("fprintf.txt", "w");
for (unsigned int i = 0; i < 5000000; ++i) fprintf(file3, "%d", i);
fclose(file3);

sprintf:

FILE* file = fopen("sprintf.txt", "w");
char buffer[8];
for (unsigned int i = 0; i < 5000000; ++i) {
sprintf(buffer, "%d", i);
fwrite(buffer, strlen(buffer), 1, file);
}
fclose(file);

itoa:
FILE* file2 = fopen("itoa.txt", "w");
char buffer[8];
for (unsigned int i = 0; i < 5000000; ++i) {
itoa(i, buffer, 10);
fwrite(buffer, strlen(buffer), 1, file2);
}
fclose(file2);

iostreams:
ofstream f("ofstream.txt");
for (unsigned int i = 0; i < 5000000; ++i) f << i;


C#:
StreamWriter f = File.CreateText("csharp.txt");
for (uint i = 0; i < 5000000; ++i) f.Write(i);
f.Close();


If I made some essential errors, please let me know. Once again, I
wouldn't code this way in C/C++ to write a bulk of integers in text
format to a file, but why shouldn't this simple (to write) code be
fast too?
Post by Alan Bellingham
Alan Bellingham
Andre
Alan Bellingham
2008-05-11 18:42:58 UTC
Permalink
Post by Andre Kaufmann
Post by Alan Bellingham
So use std::fread() and std::fwrite(), then. If your environment isn't
providing those, it's not providing standard C++ anyway.
Not that I want to be pedantic, but this is IMHO the C way to write to
files,
But they *are* provided by the C++ language, by adoption. So although it
*is* the C way, it is also *a* C++ way.
Post by Andre Kaufmann
If I made some essential errors please let me know. Once again I
wouldn't code this way in C/C++ to write a bulk of integers in text
format to a file, but why shouldn't this simple (to write) code not be
fast too ?
iostreams is a high level abstraction built on lower levels (and
implementations can vary in quality). Throwing away the entire language
because you find the high-level abstraction insufficiently fast is odd
when you still have the low level stuff available, and can make it
super-fast.

For what it's worth, by the way, I do everything via the stringstreams,
slurping the entire file in or out with a single file read when
required. None of my files are very large, though.
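
A minimal sketch of that slurp, assuming a smallish file (the function name and
path handling are mine):

#include <fstream>
#include <sstream>
#include <string>

std::string slurp(const char* path)
{
    std::ifstream in(path, std::ios::in | std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();        // one bulk transfer from the file buffer
    return ss.str();         // whole file in memory; parse/format it from here
}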

Alan Bellingham
--
Team Browns
ACCU Conference 2009: to be announced
Andre Kaufmann
2008-05-12 06:12:45 UTC
Permalink
Post by Alan Bellingham
[...]
Post by Andre Kaufmann
If I made some essential errors please let me know. Once again I
wouldn't code this way in C/C++ to write a bulk of integers in text
format to a file, but why shouldn't this simple (to write) code not be
fast too ?
iostreams is a high level abstraction built on lower levels (and
That doesn't mean that it can't be fast, does it? And I don't think it's
such a high-level abstraction: a template using the << operator, which
writes a single variable to a file / buffer.
Post by Alan Bellingham
implementations can vary in quality). Throwing away the entire language
because you find the high-level abstraction insufficiently fast is odd
when you still have the low level stuff available, and can make it
super-fast.
I don't want to throw the language away. But I don't understand why it
can't be changed or discussed.
Post by Alan Bellingham
For what it's worth, by the way, I do everything via the stringstreams,
slurping the entire file in or out with a single file read when
required. None of my files are very large, though.
O.k. - but if you don't have to handle large files, the speed difference
can be neglected. If you store a lot of data in memory, though, it will be
slower than handling only small chunks of data in memory.
Post by Alan Bellingham
Alan Bellingham
Andre
unknown
2008-05-12 15:29:21 UTC
Permalink
Post by Andre Kaufmann
Post by Alan Bellingham
iostreams is a high level abstraction built on lower levels (and
That doesn't mean that it can't be fast ? And I don't think that it's
such a high level abstraction. A template using the << operator, which
writes a single variable to a file / buffer.
But abstractions tend to be generic.
And specific is usually faster than generic because
at some point, the generic must become specific.

Consider also, that in addition to wrapping the fwrite()
operation (slower than hand-coded), the << must decide
how to format the variable (sprintf). As you have seen,
the unformatted itoa is much faster.

You might further test the abstraction-layer file speed by
writing a fixed string "Data" instead of number conversions.
I suspect fwrite would be insignificantly faster than <<.
You could also try with/without strlen().
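
Untested sketch of such a test (file names and the GetTickCount timing are only
illustrative):

#include <stdio.h>
#include <fstream>
#include <windows.h>

int main()
{
    const char data[] = "Data";
    DWORD t0 = GetTickCount();

    FILE* file = fopen("fwrite_fixed.txt", "w");
    for (unsigned int i = 0; i < 5000000; ++i) fwrite(data, sizeof(data) - 1, 1, file);
    fclose(file);

    DWORD t1 = GetTickCount();

    std::ofstream f("stream_fixed.txt");
    for (unsigned int i = 0; i < 5000000; ++i) f << data;
    f.close();

    DWORD t2 = GetTickCount();

    printf("fwrite: %lu ms  <<: %lu ms\n", (unsigned long)(t1 - t0), (unsigned long)(t2 - t1));
    return 0;
}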
Andre Kaufmann
2008-05-13 04:31:00 UTC
Permalink
Post by unknown
[...]
Consider also, that in addition to wrapping the fwrite()
operation (slower than hand-coded), the << must decide
how to format the variable (sprintf). As you have seen,
the unformatted itoa is much faster.
I haven't profiled why iostream is that slow, but I think it's mainly
because (in one of the implementations of iostream):

- exception handling
- sprintf is used in one implementation.
- some other overhead

If the implementation of itoa is faster, then it could be used instead
of sprintf - couldn't it?
But judging from other posts, I think there isn't that much interest in
why iostreams are that slow; it's simply accepted as fact, and criticism
isn't that welcome.
Am I the only one who thinks it ridiculous that one
implementation is 5 times slower than the equivalent C# code?
(I always thought code for a virtual machine would have much more overhead
than C++ code.)

Side note: itoa could also be somewhat faster if the radix
parameter were specialized.
Post by unknown
[...]
Andre
Chris Uzdavinis (TeamB)
2008-05-13 12:30:14 UTC
Permalink
Post by Andre Kaufmann
I haven't profiled why iostream is that slow, but I think it's mainly
- exception handling
- sprintf is used in one implementation.
- some other overhead
In one program I was working on, I made direct use of sprintf to
create a massive number of strings sent across the network.
(Representing every stock market data change for a single data
source.)

Profiling revealed that 52% of the entire runtime was spent inside
sprintf, and I only did things as complicated as:

sprintf(buf, "%d", val);

and similar for doubles. Since I wasn't using any of the fancy format
options, it was wasteful to use a function that parsed them every
single time (like sprintf). I hand-wrote an optimized int-to-string
function that was specific to my needs and simple format requirements,
and vastly reduced the number of calls to sprintf. As a result, the
program spent less than 2% in sprintf's internals, and along with a
few other optimizations, throughput increased by 20,000 messages per
second.

(This was using g++'s standard library, not C++ Builder, so don't
extrapolate too much here.)

I am now of the opinion that the printf family of functions is not
so fast after all, since you pay for the generality.
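
Roughly the idea - a from-scratch sketch, not the actual code from that program,
and it only handles non-negative values:

// Convert without parsing a format string; returns the number of chars written.
int int_to_str(unsigned int v, char* out)
{
    char tmp[16];
    int n = 0;
    do {
        tmp[n++] = (char)('0' + v % 10);   // least significant digit first
        v /= 10;
    } while (v != 0);
    for (int i = 0; i < n; ++i)            // reverse into the caller's buffer
        out[i] = tmp[n - 1 - i];
    out[n] = '\0';
    return n;
}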
--
Chris (TeamB);
unknown
2008-05-13 16:02:47 UTC
Permalink
Post by Andre Kaufmann
If the implementation of itoa is faster, then it could be used instead
of sprintf - couldn't it ?
No.
iostream allows for formatting. itoa doesn't.
Post by Andre Kaufmann
But deriving from other posts, I think there isn't that much interest in
why iostreams are that slow, it's simply seen as fact and critics aren't
welcome (that much).
You could specialize your own ostream, removing
the built-in flexibility.
And, as Darko mentioned, there are people who
have bitten the bullet and written faster versions.

IMO most people who require faster code write specialized
code to handle the slow parts, and just accept slow code as
the price of faster coding on the parts that don't need to be fast.

I tend towards "all code should be fast", even if it makes
no noticeable difference. That's why I coded my business
apps in 100% ASM for 20 years.
Post by Andre Kaufmann
Am I the only one, who thinks it to be ridiculous, that one
implementation is 5 times slower than the equivalent C# code ?
(I always thought code for a virtual machine to have much more overhead
than C++ code)
Realize a few things:
StreamWriter is not C++ standard, has no legacy, and can
be optimized at Microsoft's discretion.
In particular, it can buffer as much data as it wishes
before committing the data to file.
There also appear to be no formatting options available,
making it faster.
Post by Andre Kaufmann
Side note: itoa could also be somewhat faster too, if the radix
parameter would be specialized.
Yes, by a few cycles, not more than 1 or 2% faster.
Andre Kaufmann
2008-05-13 20:14:39 UTC
Permalink
Post by unknown
Post by Andre Kaufmann
If the implementation of itoa is faster, then it could be used instead
of sprintf - couldn't it ?
No.
iostream allows for formatting. itoa doesn't.
Why do I have to pay for something I don't use (in this case):

if (no_formatting_specified) UseItoa(value)
else ......
Post by unknown
[...]
I tend towards "all code should be fast", even if it makes
no noticeable difference. That's why I coded my business
apps in 100% ASM for 20 years.
Huh. Sarcasm? If it's not significant, I don't optimize.
I only optimize if my code gets significantly slow.
And somehow I find iostreams performance to be significant for most of
my applications.

I remember porting some Delphi code to C++ years ago, using
iostreams. Then I wondered why the code had become that slow. I was doing
something quite simple: reading hex values from files up to 500K in size.
Post by unknown
Post by Andre Kaufmann
Am I the only one, who thinks it to be ridiculous, that one
implementation is 5 times slower than the equivalent C# code ?
(I always thought code for a virtual machine to have much more overhead
than C++ code)
StreamWriter is not C++ standard, has no legacy, and can
No, it's C# standard - and what does that mean?
There aren't that many C++ standard library implementations, so why can't
they be optimized?

Another example is the new tr1::tuple implementation. I checked how many
times the object is constructed and copied: objects
returned in a tuple are copy-constructed and destroyed 3-4 times. Returning
simple reference pointers in a tuple might cause the reference counter
to be incremented / decremented multiple times.
But since C++ currently lacks move semantics, I think that can't be avoided.
Post by unknown
be optimized at Microsoft's discretion.
In particular, it can buffer as much data as it wishes
before committing the data to file.
Ehm, iostreams can't ? I just wonder what "<< flush" is for ?
Post by unknown
There also appears to be no formatting options available,
making it faster.
Once again, an example of "don't pay for something you don't use":

Write a formatted string with C# to a file:

Function: StreamWriter.Write(Format, [Objects....]):

Example: w.Write("{0:D6} {1} {2} {0}", 1, 2, 3);
Output: 000001 2 3 000001

And by the way: even using the format function instead of the overloaded one,
C# is still faster than some iostream implementations (and C# uses
Unicode strings - therefore has more overhead).
But let's forget about C# - I don't want to start a "C# is better than
C++" discussion thread / war.
Post by unknown
Post by Andre Kaufmann
Side note: itoa could also be somewhat faster too, if the radix
parameter would be specialized.
Yes, by a few cycles, not more than 1 or 2% faster.
Sorry to disappoint you. It can be up to 2-3 times faster,
depending on the platform and the compiler.

And don't assume anything regarding performance - you just have to
follow 3 rules regarding performance:

a) measure
b) measure
c) once again measure

Why can a specialized-radix itoa be faster?

It's quite simple: the radix is used in a "/" and a "%" operation -
division / remainder. If the compiler can determine the radix at compile
time, it will convert the division into a fixed-point multiplication and
shift operations. In combination these are still faster than a division.
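
A sketch of what I mean (radix fixed at 10, so the compiler can apply the
multiply-and-shift trick; it writes the digits backwards, as itoa
implementations usually do):

// 'end' points one past the end of the caller's buffer; returns the first digit.
char* utoa10(unsigned int v, char* end)
{
    *--end = '\0';
    do {
        *--end = (char)('0' + v % 10);   // % 10 and / 10 by a compile-time constant
        v /= 10;
    } while (v != 0);
    return end;
}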

But I agree, compared to the total runtime it might have a negligible
influence on the performance of a typical C++ application. On the other
hand, it's quite simple to specialize itoa for the 2 most-used radixes:
10 and 16.

I'm >not< arguing that C++ and the C++ library are generally slow. For
example, I'm perfectly happy with the performance of the STL (and the
handling).

Andre
unknown
2008-05-14 15:57:09 UTC
Permalink
Post by Andre Kaufmann
Post by unknown
iostream allows for formatting. itoa doesn't.
Because you _might_ use it.
If you _won't_ use it, don't use streams if you also want speed.
Or use a specialized stream.
Post by Andre Kaufmann
Post by unknown
I tend towards "all code should be fast", even if it makes
no noticeable difference. That's why I coded my business
apps in 100% ASM for 20 years.
Huh. Sarcasm ?
No, truth.
My app started before CP/M when 8 to 48k had to hold
application, screen, data and home-made OS.
There was nothing that wasn't optimized.
Only in the last few years have I said occasionally
"Oh well, that won't matter". I cringe every time
I add inefficient code, but remind myself that it
is easier to understand (if it isn't, then I don't
do it). But I still go for the best "big picture"
algorythm" I can think up.
Post by Andre Kaufmann
If it's not significant I don't optimize.
I only optimize if my code got significantly slow.
I try to not let it get slow in the first place.
Post by Andre Kaufmann
And somehow I find iostreams performance to be significant for most of
my applications.
Then quit using them.
They are a convenience. They are not required.
Post by Andre Kaufmann
I remember having ported some Delphi code to C++ years ago and I used
iostreams. Then I wondered why the code got that slow. I did something
quite simple, reading hex values from files up to 500K in size.
Then you should have learned your lesson and quit using them.
Post by Andre Kaufmann
Post by unknown
StreamWriter is not C++ standard, has no legacy, and can
No it's C# standard - and what does that mean ?
It means streams must carry all of the baggage of the past.
StreamWriter is new, and so can rid itself of backwards compatibility.
Post by Andre Kaufmann
There aren't that many C++ standard library distributions, so why can't
they be optimized.
Because they must follow the cumbersome rules of the Standard,
including the sink from your mother-in-law's second unused kitchen.
Don't use something you don't want to pay for.
Post by Andre Kaufmann
Why can a specialized radix itoa faster ?
It's quite simple if the radix is used in a "/" and "%" operation -
division / remainder. If the compiler can determine the radix it will
convert the division to a fix-comma multiplication and shift operations.
In combination these are still faster than a division.
Yes, I read that somewhere, but can't remember how to actually do it.
Once again we see that the "easy way" is slower.
mov ecx,10
itoaloop:
xor edx,edx // div needs edx:eax; clear the high half
div ecx // eax = quotient, dl = remainder (0..9)
or dl,'0'
mov [edi],dl
dec edi
or eax,eax
jnz itoaloop
Post by Andre Kaufmann
I'm >not< arguing C++ and the C++ library to be generally slow. For
example I'm perfectly happy with the performance of the STL (and the
handling).
It is an unfortunate aspect of programming that
the simplest program is often the slowest.
Andre Kaufmann
2008-05-14 19:46:11 UTC
Permalink
Post by unknown
[...]
Post by Andre Kaufmann
I remember having ported some Delphi code to C++ years ago and I used
iostreams. Then I wondered why the code got that slow. I did something
quite simple, reading hex values from files up to 500K in size.
Then you should have learned your lesson and quit using them.
I don't use them - but I would if they were faster.
Post by unknown
[...]
mov ecx,10
xor edx,edx
div ecx
or dl,'0'
mov [edi],dl
dec edi
or eax,eax
jnz itoaloop
DIV 10 - number to divide by 10 in ecx - result in edx:

mov eax,0CCCCCCCDh
mul ecx // edx:eax = eax * ecx
shr edx,3 // edx = ecx / 10
Post by unknown
[...]
Andre
unknown
2008-05-14 22:30:10 UTC
Permalink
Hmmm...
multiply by 0xCCCCCCCD
divide by 2^35 (shift right 3 the upper 32bit result)
or 3,435,973,837 / 34,359,738,368
It would be even faster if it didn't require another
multiply and subtract to get the remainder.
Still, almost twice as fast on this test.

For simplicity, this uses a weird, dangerous call protocol.
It also ignores negatives.

#include <stdio.h> // printf
#include <conio.h> // getch
#include <windows.h> // GetTickCount, DWORD

char* _fastcall itoa_div( int i, char* buf )
{
asm{
push edi
mov edi,buf
mov ecx,10
mov byte ptr[edi],0
itoaloop:
xor edx,edx
dec edi
div ecx
or dl,'0'
mov [edi],dl
or eax,eax
jnz itoaloop
mov eax,edi
pop edi
};
return (char*)_EAX;
}
char* _fastcall itoa_mul( int i, char* buf )
{
asm{
push edi
push ebx
mov edi,buf
mov ecx,i;
mov byte ptr[edi],0
again:
dec edi
mov eax,0xCCCCCCCD
mul ecx // edx:eax = 0xCCCCCCCD * ecx
shr edx,3
mov ebx,edx // x / 10
mov eax,10
mul ebx // (x / 10) * 10
sub ecx,eax // remainder
or cl,'0'
mov [edi],cl
mov ecx,ebx // x / 10
or ecx,ecx // recurse
jnz again
mov eax,edi
pop ebx
pop edi
};
return (char*)_EAX;
}


int main (void)
{
int i;
char answer[16];

DWORD first, second, third;

first = GetTickCount();
for( i = 0; i<500000; ++i )
itoa_div( 98765432, &answer[15] );

second = GetTickCount();
for( i = 0; i<500000; ++i )
itoa_mul( 98765432, &answer[15] );

third = GetTickCount();

printf( "\nitoa_div = %i", second - first );
printf( "\nitoa_mul = %i", third - second );
printf( "\n98765432 itoa_div = %s\n", itoa_div( 98765432, &answer[15] ) );
printf( "\n98765432 itoa_mul = %s\n", itoa_mul( 98765432, &answer[15] ) );

getch();
return 0;
}

itoa_div = 110
itoa_mul = 60
98765432 itoa_div = 98765432
98765432 itoa_mul = 98765432
Alex Bakaev [TeamB]
2008-05-14 22:32:22 UTC
Permalink
Andre Kaufmann wrote:
Not sure what this code does...
Post by Andre Kaufmann
mov eax,0CCCCCCCDh
mul ecx
shr edx,3
From http://courses.ece.uiuc.edu/ece390/books/artofasm/CH09/CH09-6.html:

Divide ax by 10
mov dx, 6554 ;Round (65,536/10)
mul dx
unknown
2008-05-15 00:02:06 UTC
Permalink
Post by Alex Bakaev [TeamB]
Not sure what this code does...
mul by 3.4 billion
divide by 34 billion (shift right 3+32 bits).
Post by Alex Bakaev [TeamB]
Post by Andre Kaufmann
mov eax,0CCCCCCCDh
mul ecx
shr edx,3
Divide ax by 10
mov dx, 6554 ;Round (65,536/10)
mul dx
Hmmm, you are suggesting simple mul by 0x1999999A
Same resulting code (one less shift).

mov eax,0x1999999A
mul ecx
// result in edx

itoa_div = 141
itoa_mul = 80
itoa_mul2 = 80
Ed Mulroy [TeamB]
2008-05-12 01:10:17 UTC
Permalink
Post by Andre Kaufmann
Not that I want to be pedantic, but this is IMHO the C way to write to
files...
Are you implying that as it is a C way it therefore is inherently inferior
to a C++ way?

And of course, fwrite IS part of C++.

. Ed
Andre Kaufmann
2008-05-12 06:22:11 UTC
Permalink
Post by Ed Mulroy [TeamB]
Post by Andre Kaufmann
Not that I want to be pedantic, but this is IMHO the C way to write to
files...
Are you implying that as it is a C way it therefore is inherently inferior
to a C++ way?
Yes. Because it's not type safe. As you surely know.
And the old C functional interfaces are the main reason for buffer
overflows, because they are inherently unsafe.

Or won't you agree that:

file << string;

is safer and more type safe than:

fprintf(file, "%s", buffer)

?
Post by Ed Mulroy [TeamB]
And of course, fwrite IS part of C++.
I haven't neglected that. I only said that it has a somewhat C-ish interface.

I don't think we have to discuss why C is part of C++, do we? ;-)
Post by Ed Mulroy [TeamB]
. Ed
Andre
Ed Mulroy [TeamB]
2008-05-12 12:44:49 UTC
Permalink
Post by Andre Kaufmann
Yes. Because it's not type safe. As you surely know.
"safe" and "type safe" are not the same thing. If something is not "safe",
it is a serious issue. If something is not "type safe" the consequence is
that you must pay attention to what you are doing. Characterizing that as
"inferior" is incorrect.
Post by Andre Kaufmann
And the old C functional interfaces are the main reason for buffer
overflows, because they are inherently unsafe.
A couple of functions have no ability to avoid such things. You do not
decry those functions. You damn the entire API.
Post by Andre Kaufmann
file << string;
fprintf(file, "%s", buffer)
You created 'buffer' so you are responsible for its contents. If 'buffer' is
incorrect, that is due to a failure to correctly design and code and not a
consequence of which I/O routine was used. Examination and additional
handling of it by the '<<' call is redundant.

Not the best example. Overflow danger arguments might better be attributed
to input than output.

If you rail against scanf, fscanf or gets then you have an argument, but one
which is against those specific functions and not against the whole C API.

. Ed
Chris Uzdavinis (TeamB)
2008-05-12 14:53:12 UTC
Permalink
"Ed Mulroy [TeamB]" <***@bitbuc.ket> writes:

[snip]

Where the C interface is clearly inferior is with generic code, and
non-fundamental types.


template <class T>
void f(T obj)
{
char buf[64];
sprintf(buf, "???", obj); // what format specifier goes here?
}

Not only is it hard (though not impossible) to determine the proper
format specifier for the types involved:


char const * format_str_for_type(int) { return "%d";}
char const * format_str_for_type(char) { return "%c";}
char const * format_str_for_type(float) { return "%f";}
char const * format_str_for_type(char const *) { return "%s";}

template <class T>
char const * format_str_for_type(T const &) {
std::ostringstream oss;
oss << "Unknown type - unable to return printf format str for: ";
oss << typeid(T).name();
throw oss.str();
}

etc.

template <class T>
void f(T obj)
{
char buf[64];
sprintf(buf, format_str_for_type(obj), obj);
}


There still is the problem with non-primitive types. You are not
allowed to pass C++ objects through functions taking ... parameters.
You are not allowed to extend the types of things that the C io
library handles. In short, you have to build a custom streamer on top
of the C syntax that is unrelated to the C syntax, for any
non-primitive types.

For safety, we would need some sort of meta-programming inside f() to
check if type T is a primitive type or not. If it is, THEN it can
call sprintf and use the format_str_for_type() helper. Otherwise, it
must do "something else".
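
For contrast, the iostreams version of such a generic function needs none of
that machinery - a minimal sketch:

#include <sstream>
#include <string>

// Works for any T with an operator<<, built-in or user-defined.
template <class T>
std::string to_text(T const & obj)
{
    std::ostringstream oss;
    oss << obj;
    return oss.str();
}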
--
Chris (TeamB);
Ed Mulroy [TeamB]
2008-05-12 16:57:51 UTC
Permalink
Your reply seems to be mis-aimed. It creates and attempts to give a C++
solution for a problem not mentioned. Perhaps you might post it again as a
reply to the message to which you were intending to reply.

. Ed
Andre Kaufmann
2008-05-13 05:07:24 UTC
Permalink
Post by Ed Mulroy [TeamB]
Your reply seems to be mis-aimed. It creates and attempts to give a C++
I think it isn't mis-aimed.
Post by Ed Mulroy [TeamB]
solution for a problem not mentioned.
You stated that the C interface is not inferior, didn't you?
May I quote:

'Characterizing that as "inferior" is incorrect.'
Post by Ed Mulroy [TeamB]
Perhaps you might post it again as a
reply to the message to which you were intending to reply.
[...]
Andre
Andre Kaufmann
2008-05-12 16:04:36 UTC
Permalink
Post by Ed Mulroy [TeamB]
Post by Andre Kaufmann
Yes. Because it's not type safe. As you surely know.
"safe" and "type safe" are not the same thing. If something is not "safe",
it is a serious issue.
My opinion is that many (not all) C functions are unsafe and some
additionally are not type safe.

And please don't over-interpret unsafe as "totally unsafe". I'm using
unsafe in the context of comparing C++ to C, meaning that the C++
functions / objects are safer to use than the C functions.

If I generalized "unsafe" I couldn't use integers (generally) safely
either.

E.g.:

if (lenString1 + lenString2 > maxStringLen)

might be unsafe too (the addition can overflow).
Post by Ed Mulroy [TeamB]
If something is not "type safe" the consequence is
that you must pay attention to what you are doing. Characterizing that as
"inferior" is incorrect.
The problem is that sometimes you can't pay attention without
implementing the functions on your own, because you can't specify the
length of the output buffer in many (perhaps all) C functions that use
output buffers.
Post by Ed Mulroy [TeamB]
Post by Andre Kaufmann
And the old C functional interfaces are the main reason for buffer
overflows, because they are inherently unsafe.
A couple of functions have no ability to avoid such things. You do not
decry those functions. You damn the entire API.
The discussion was mainly about the file/string functions. I have only
argued that I think those C functions are not safe enough, because you
can make many mistakes which are simply not possible with their C++
counterparts.

So with "C functional interfaces" I meant file/string functions, not
generally the whole interface.
Post by Ed Mulroy [TeamB]
[...]
You created 'buffer' so you are responsible for its contents.
And if you make mistakes and can't control the buffer or the
format string? How do you ensure that the fprintf function doesn't
overwrite the buffer boundaries? Where can you specify that the
resulting string must not overwrite the buffer boundaries?

E.g:

char format[] = "%s %d %f %s %d";
sprintf(buffer, format, s1, i1, f1, s2, i2);

Do you really expand all the arguments in [format] to check the
resulting length ?
Post by Ed Mulroy [TeamB]
If 'buffer' is
incorrect, that is due to a failure to correctly design and code and not a
No. fprintf, sprintf and printf are unsafe by design, since you can't
specify a maximum length for the output buffer.
Post by Ed Mulroy [TeamB]
consequence of which I/O routine was used. Examination and additional
handling of it by the '<<' call is redundant.
Short example. You have the string:

yyyyy: %s xxxxxx: %d in a resource file.

Which gets (incorrectly) translated in another language to:

rrrrr: %d zzzzzz: %s in a resource file.

O.k. the translator made the mistake, but how does the developer prevent
unexpected crashes due to wrong format strings with sprintf?

Regarding readability I would favor the C way over stringstreams, but
IMHO the best alternative would be to use the C# or Boost way:

cout << boost::format("writing %1%, x=%2% : %3%-th try")
% "hello" % 1 % 2;

Type-safe and more overflow-safe than printf, and more readable than
stringstreams - about as readable as printf itself.
Post by Ed Mulroy [TeamB]
Not the best example. Overflow danger arguments might better be attributed
to input than output.
Where does the input - the data I write to the file - come from?
Post by Ed Mulroy [TeamB]
If you rail against scanf, fscanf or gets then you have an argument, but one
which is against those specific functions and not against the whole C API.
The discussion was about file functions and perhaps additionally string
functions. I don't mean to say that all C functions are generally
unsafe. But there are many of them which are less safe than their C++
counterparts.
Post by Ed Mulroy [TeamB]
. Ed
Andre
unknown
2008-05-12 17:39:54 UTC
Permalink
Post by Andre Kaufmann
Post by Ed Mulroy [TeamB]
You created 'buffer' so you are responsible for its contents.
And if you are making mistakes and can't control the buffer or
formatting string ?
If you have no control, then you can't use a function
that requires control. Functions that don't require
control must be more complex (slower).

OTOH if you _do_ have control, then a function that
requires control can be much faster because it relies
on write-time checks instead of run-time checks.
Post by Andre Kaufmann
How do you ensure that the fprintf function doesn't
overwrite the buffer boundaries. Where can you specify that the
resulting string doesn't overwrite the buffer boundaries.
By the use of precision.
char buffer[31];
sprintf( buffer, "%.30s", somecharptr );
Post by Andre Kaufmann
char format[] = "%s %d %f %s %d";
sprintf(buffer, format, s1, i1, f1, s2, i2);
Do you really expand all the arguments in [format] to check the
resulting length ?
Normally, yes. I give each one a size/precision.
If it is just debug code, no, but then buffer
is 1k and s is known to be small.
Post by Andre Kaufmann
yyyyy: %s xxxxxx: %d in a resource file.
rrrrr: %d zzzzzz: %s in a resource file.
O.k. the translator made the mistake, but how does the developer prevent
any unexpected crashes due to false format strings with sprintf ?
If that is an issue, the programmer must either
lock down the strings, or:
1) provide for rearranged data,
2) build the format string at runtime from a field table, or
3) build the output string directly from the field table (easier).
Post by Andre Kaufmann
Post by Ed Mulroy [TeamB]
Not the best example. Overflow danger arguments might better be attributed
to input than output.
Where does the input - the data I write to the file come from ?
Presumably from inside your application, after having been
read/checked/cleaned up from wherever it originated.
Post by Andre Kaufmann
The discussion was about file functions and perhaps additionally string
functions. I don't mean to say that all C functions are generally
unsafe. But there are many of them which are less safe than their C++
counterparts.
Safety costs.
The question then is: do you spend your safety dollars
up-front (development) or at runtime (poor performance)?
Andre Kaufmann
2008-05-13 05:03:47 UTC
Permalink
Post by unknown
[...]
OTOH if you _do_ have control, then a function that
requires control can be much faster because it relies
on write-time checks instead of run-time checks.
O.K. But adding a simple output buffer length parameter to each print
function wouldn't be IMHO that much overhead.
Post by unknown
Post by Andre Kaufmann
How do you ensure that the fprintf function doesn't
overwrite the buffer boundaries. Where can you specify that the
resulting string doesn't overwrite the buffer boundaries.
By the use of precision.
char buffer[31];
sprintf( buffer, "%.30s", somecharptr );
Good idea. But I meant to be able to specify the length of the buffer
for the sprintf function itself, as in the sprintf_s function (strsafe.h
- Windows SDK).

It's IMHO safer if the print function itself checks whether the buffer
end has been reached - you can't simply forget to check it.
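
What I mean, sketched here with the C99 snprintf, which takes the destination
size and reports truncation:

#include <stdio.h>

void demo()
{
    char buf[16];
    int n = snprintf(buf, sizeof(buf), "value=%d", 1234567);
    if (n < 0 || n >= (int)sizeof(buf)) {
        // truncated (or an encoding error) - handle it instead of overflowing
    }
}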
Post by unknown
[...]
Post by Andre Kaufmann
O.k. the translator made the mistake, but how does the developer prevent
any unexpected crashes due to false format strings with sprintf ?
If that is an issue, the programmer must either
Lock down the strings or
1) provide for rearranged data.
2) build the format string at runtime from a field table.
or
3) build the output string directly from the field table (easier).
Yes. This is what iostreams do ;-) - concatenating output strings.

But you could simply remove the type specifier and let the compiler do
its task - calling the convert-to-string function for each parameter in
a type-safe way - as the boost format function does.

E.g. instead of:

"%s %d %d %d"

"%0% %1% %2% %3%"

%x% means: insert parameter number [x] here.

So the translator can even rearrange the format string and the position of
the parameters. The resulting string can still be wrong, but since the
function returns a string, no output buffer can be overwritten.
Post by unknown
[...]
Safety costs.
The question then is do you spend your safety dollars
It doesn't cost that much to use safe C string functions.
You could simply use strsafe.h from the Windows SDK. The downside is
that it's not part of the C/C++ standard
and therefore not platform independent.
IIRC it has been rejected by the committee, but I don't know why.
Post by unknown
up-front (development) or runtime (poor performance)?
Should I then use C#, because it's safe and 5 times faster? No, I don't
claim it to be generally faster than C++. It isn't.
But I hadn't expected it to be that much faster at writing integers to
a text file.

Andre
Mark Jacobs
2008-05-30 17:14:18 UTC
Permalink
Post by Andre Kaufmann
E.g. writing 500000 integers [a real world example would be to stream a
XML file) to a text file on my computer (Vista, C++) - although this
(code at the end of this post)
Use AnsiString instead :-

int ii; AnsiString str=""; FILE *fp;
for (ii=0;ii<500000;++ii) str+=AnsiString(ii);
fp=fopen("myfile.txt","wb");
fwrite(str.c_str(),sizeof(char),str.Length(),fp);
fclose(fp);

Now that is fast (<0.5 seconds in my tests!).
--
Mark Jacobs
www.jacobsm.com
Andre Kaufmann
2008-05-31 08:18:05 UTC
Permalink
Post by Mark Jacobs
Post by Andre Kaufmann
E.g. writing 500000 integers [a real world example would be to stream a
XML file) to a text file on my computer (Vista, C++) - although this
(code at the end of this post)
Use AnsiString instead :-
int ii; AnsiString str=""; FILE *fp;
for (ii=0;ii<500000;++ii) str+=AnsiString(ii);
fp=fopen("myfile.txt","wb");
fwrite(str.c_str(),sizeof(char),str.Length(),fp);
fclose(fp);
Now that is fast (<0.5 seconds in my tests!).
Thank you for the hint. The only problem is that AnsiString is not part
of the C++ standard and it can be used (unfortunately) only by BCB on
Windows.

But anyway, a good example that C++ isn't automatically fast; it depends on
the code / libraries themselves ;-).

Andre
Duane Hebert
2008-06-02 19:45:45 UTC
Permalink
Post by Mark Jacobs
Post by Andre Kaufmann
E.g. writing 500000 integers [a real world example would be to stream a
XML file) to a text file on my computer (Vista, C++) - although this
(code at the end of this post)
Use AnsiString instead :-
int ii; AnsiString str=""; FILE *fp;
for (ii=0;ii<500000;++ii) str+=AnsiString(ii);
fp=fopen("myfile.txt","wb");
fwrite(str.c_str(),sizeof(char),str.Length(),fp);
fclose(fp);
Now that is fast (<0.5 seconds in my tests!).
Thank you for the hint. The only problem is that AnsiString is not part of
the C++ standard and it can be used (unfortunately) only by BCB on
Windows.
So use Qt 4.4 as the subject says <g>
Mike Margerum
2008-05-11 19:24:44 UTC
Permalink
Post by Andre Kaufmann
QT4 is a good framework, for sure. But the price of the commercial
version is >for me< too high, if the price I've found is correct.
And I don't know if Nokia is really interested in targeting all
it is pretty expensive but they do give a pretty significant discount
for <$200k companies. I'm going to need it for both CE and vista >.<

In the long run it will be well worth it for me.

I'm surprised Qt doesn't have some kind of file abstraction of its own
that is highly performant.
Duane Hebert
2008-05-12 12:56:06 UTC
Permalink
Post by Andre Kaufmann
QT4 is a good framework, for sure. But the price of the commercial
version is >for me< too high, if the price I've found is correct.
And I don't know if Nokia is really interested in targeting all
it is pretty expensive but they do give a pretty significant discount for
<$200k companies. I'm going to need it for both CE and vista >.<
In the long run it will be well worth it for me.
I'm surprised Qt doesn't have some kind of file abstraction of its own
that is highly performant.
I guess it depends on your POV but QFile and QFileInfo seem to do
pretty well here. And coupled with QTextStream and QTextCodec they
deal with Unicode text out of the box.
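
Something like this, sketched from memory (not checked against the Qt 4.4 docs):

#include <QFile>
#include <QTextStream>

void writeNumbers()
{
    QFile file("numbers.txt");
    if (!file.open(QIODevice::WriteOnly | QIODevice::Text))
        return;
    QTextStream out(&file);
    out.setCodec("UTF-8");            // explicit text encoding
    for (int i = 0; i < 5000000; ++i)
        out << i;
}                                     // QFile closes in its destructor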

As to the price, while it's initially expensive, the quality and support
more than make up for it IMO.
Mike Margerum
2008-05-11 19:26:14 UTC
Permalink
Post by Andre Kaufmann
And I don't know if Nokia is really interested in targeting all
platforms (in the future).
I had the same concern, btw. They just released a Windows Mobile port,
which shocked the heck out of me and tells me they are serious about
leaving Trolltech alone to port to every platform possible.
na
2008-05-12 14:48:16 UTC
Permalink
Post by Andre Kaufmann
But I wouldn't use pure C++ for cross platform programming too.
You would have to use the C++ standard library and IMHO the C++ iostreams library is quite unusable regarding
performance.
Nothing *std::ios::sync_with_stdio(false);* does not fix...
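
For example (a minimal sketch - call it once, before any I/O on the standard
streams):

#include <iostream>

int main()
{
    std::ios::sync_with_stdio(false);   // decouple cout/cin from the C stdio buffers
    for (unsigned int i = 0; i < 5000000; ++i)
        std::cout << i;
    return 0;
}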
Andre Kaufmann
2008-05-12 15:03:52 UTC
Permalink
Post by na
Post by Andre Kaufmann
But I wouldn't use pure C++ for cross platform programming too.
You would have to use the C++ standard library and IMHO the C++ iostreams library is quite unusable regarding
performance.
Nothing *std::ios::sync_with_stdio(false);* does not fix...
Thank you. Although I don't understand why this should help with serially
writing to a file, it is somewhat faster.
But iostreams still seem to be slower (in this simple example).

Andre
Darko Miletic
2008-05-12 16:04:01 UTC
Permalink
Post by Andre Kaufmann
Post by na
Post by Andre Kaufmann
But I wouldn't use pure C++ for cross platform programming too.
You would have to use the C++ standard library and IMHO the C++
iostreams library is quite unusable regarding performance.
Nothing *std::ios::sync_with_stdio(false);* does not fix...
Thank you. Although I don't understand why this should help in serially
writing to a file faster it's somewhat faster.
But still iostreams seem to be slower (in this simple example)
There is faster replacement

FASTreams
http://www.msobczak.com/prog/fastreams/
Andre Kaufmann
2008-05-13 05:05:34 UTC
Permalink
Post by Darko Miletic
[...]
There is faster replacement
FASTreams
http://www.msobczak.com/prog/fastreams/
Thank you very much for the link.

Andre
na
2008-05-12 18:46:10 UTC
Permalink
Post by na
Nothing *std::ios::sync_with_stdio(false);* does not fix...
Thank you. Although I don't understand why this should help in serially writing to a file faster it's somewhat faster.
But still iostreams seem to be slower (in this simple example)
Glad it helped a little. By the way, this is how I implement it - UNTESTED, just a quick type-up.
Does this still run too slowly on your system?

****************************
std::ios::sync_with_stdio(false);
std::ofstream outfile;
outfile.open("C:\\Temp\\File.dat", std::ios::out | std::ios::binary);

char buf[1024];
int iBytesRead = 0;

// read until the (assumed) socket wrapper reports no more data
while ((iBytesRead = pSocket->read(buf, 1024)) > 0)
{
outfile.write(buf, iBytesRead);
}

outfile.close();
****************************
Darko Miletic
2008-05-12 20:40:40 UTC
Permalink
Post by na
Post by na
Nothing *std::ios::sync_with_stdio(false);* does not fix...
Thank you. Although I don't understand why this should help in serially writing to a file faster it's somewhat faster.
But still iostreams seem to be slower (in this simple example)
Glad it helped a little. By the way this is how I implement it. UNTESTED just a quick type up.
Does this still run too slow on your system?
****************************
std::ios::sync_with_stdio(false);
std::ofstream outfile;
outfile.open("C:\\Temp\\File.dat", std::ios::out | std::ios::binary);
char buf[1024];
while (true)
{
iBytesRead = pSocket->read(buf,1024);
outfile.write(buf, iBytesRead);
}
outfile.close();
****************************
What is missing here, after opening the file, is this:
outfile.imbue(std::locale("C"));

By doing this we are making sure that no conversion is done whatsoever
on passing data.

This is very important especially if binary data are being processed.
Andre Kaufmann
2008-06-02 22:04:46 UTC
Permalink
Post by na
Post by na
Nothing *std::ios::sync_with_stdio(false);* does not fix...
Thank you. Although I don't understand why this should help in serially writing to a file faster it's somewhat faster.
But still iostreams seem to be slower (in this simple example)
Glad it helped a little. By the way this is how I implement it. UNTESTED just a quick type up.
Does this still run too slow on your system?
****************************
std::ios::sync_with_stdio(false);
std::ofstream outfile;
outfile.open("C:\\Temp\\File.dat", std::ios::out | std::ios::binary);
char buf[1024];
while (true)
{
iBytesRead = pSocket->read(buf,1024);
outfile.write(buf, iBytesRead);
}
outfile.close();
****************************
Sorry, I somehow missed your reply.

I don't know what pSocket is or should do in this context; I assume a
socket from which blocks are read with a granularity of 1024 bytes.
Yes, I assume writing memory blocks would be faster than writing single
integers. But the point is, it should be the task of iostreams to buffer the
data and write it to the wrapped file once enough data has been buffered.
Depending on the OS / file system, it should choose the
optimum block granularity for doing this.

Andre

Thomas Vieten
2008-05-13 12:12:52 UTC
Permalink
Post by Andre Kaufmann
QT4 is a good framework, for sure. But the price of the commercial
version is >for me< too high, if the price I've found is correct.
But you don't need to buy components - because almost everything is there -
you have the source code and a huge open-source resource.
And it is easy to extend. And support is included!

T.