.NET Questions (CLOSED)

Questions and Answers on any aspect of .NET. Now closed.

This discussion group is now closed.

Have a question about .NET development? Try stackoverflow.com, a worldwide community of great developers asking and answering questions 24 hours a day.

The archives of .NET Questions contain years of Q&A. Even older .NET Questions are still online, too.

.NET desktop application performance VS Native apps

I am a Delphi native Win32 developer thinking (very lightly) about preparing for a move to .NET.

However, I have heard all kinds of opinions on .NET performance.

I know that .NET is optimized for the hardware you run it on, unlike compiled software that targets the lowest common denominator.  I have also seen people run performance benchmarks against Delphi that show .NET running faster.

But benchmarks normally test very specific things.  What I want to know is whether anyone has taken a small desktop application using pretty complex UI controls, built the project in both native code and .NET, and then looked at its overall performance, including what most users see, which is GUI responsiveness.  Anyone?  Maybe this is a good challenge for MS or CodeGear to demonstrate, and all the more in CodeGear's favor if native really is that much faster than .NET.

I have read that WPF is going to add more hardware assisted performance to the GUI than WinForms and I wonder if that will bring it in line performance-wise with native GUI desktop apps.

Also, a lot of the myths about .NET speed seem to stem from opinions based on older versions of .NET.  Has .NET 2.0 made significant performance improvements, especially in GUI responsiveness?  I know that the XML parsing libs were so enhanced that they now beat out the fastest Java XML parser, and those are pretty fast.  Although it would be interesting to pit them against the native libxml2 ones to see how much faster native XML parsing is (both DOM and SAX).
Jeff C. Send private email
Sunday, March 11, 2007
 
 
.NET is very fast in terms of desktop performance. The problem is, you don't get this performance out of the box, and many developers have no clue how to write high performance managed code. There are some technical articles on this subject on MSDN.

I'm sure all new applications Microsoft starts working on are pretty much written in C#. After all, .NET is currently the best platform for developing desktop clients on Windows.
lubos
Sunday, March 11, 2007
 
 
".NET is very fast in terms of desktop performance."

Is it?  .NET applications on my system render slowly, take forever to start up, and are reminiscent of Java-based GUI applications.  Perhaps it would be different if the .NET runtime were already loaded in memory at bootup, instead of being flushed all the time.


"the problem is, you don't get this performance out of the box and many developers have no clue how to write high performance managed code. "

Even MS based .NET dependent applications are slow.  SQL 2005's "SQL Server Management Studio" is a dog compared to SQL 2000's query analyzer.


"I'm sure all new applications Microsoft starts working on are pretty much written in C#. "

I would not be so sure.  You could be correct, but I suspect that C++ is still used for many things.


"After all, .NET is currently the best platform for developing desktop clients on Windows."

That's opinion.  I suspect .NET is useful in much the same way as VB was back in "the day": getting functional, consistent UI form-type applications up and running is quicker.  I'm not sure that is equivalent to "best".

To truly judge, the same application would have to be written in both native code and .NET, with identical functionality.  Then we would have to compare startup time and execution speed under identical operation scripts.  We would also have to compare time to develop, bugs per line of code, etc.
Dan Fleet Send private email
Sunday, March 11, 2007
 
 
.NET apps are dog slow, and yes, SQL Server Management Studio is a good example of what's come out of MS written in .NET; it is also dog slow. If you take the pains to download Small Business Accounting you will find it's the same.

A crying shame, but what can you do? Write it in C++ and you'll be wrestling with memory allocations till the cows come home. Delphi? What's their way forward?

We need a credible platform for building desktop apps. It happens to be .NET at the moment, and I'm not sure anyone's going to step up to the plate and come up with an alternative. .NET is great, but some parts are so slow it's annoying. But again, what can you do?

Second runs are always faster than the first, but the delay on the first run is just plain annoying. Even though you know what's going on, you think, "What the hell are you doing? Render already."
gimmesomespeed
Sunday, March 11, 2007
 
 
I'm writing audio software - a cross between iTunes and ACID - using C#. The software includes highly computational work like FFTs, and highly graphical work like displaying waveforms.

The first prototype of this software was actually written in Delphi. I changed because I got scared (rightly or wrongly) about the future of that product in the days before the CodeGear changes occurred.

The Delphi version had faster graphical performance. Displaying and scrolling waveforms was snappier, but this difference was only noticeable on my old development machine, a three-year-old 3.0GHz P4 with 512MB RAM. Now I'm running a Compaq nc8430 with a Core Duo 2.16GHz processor, and there's nothing noticeable.

Surprisingly, the computational performance was BETTER under C#. There is always a risk of differences introduced in code, but keep in mind that I was an experienced D7 user and a novice C# user. For comparison's sake, I re-coded the algorithms in question using VC6 and PellesC. The C# performance numbers were still better.

Another benefit of .NET over native development is that the executable/distributable program is an order of magnitude smaller than its native equivalent.

One thing I've learned about improving performance is to throw away your own implementations of algorithms and data structures, and use the .NET library components. Spend some time reading - spend a LOT of time reading - about the .NET framework, and then refactor your code to use .NET everywhere. You get a big performance improvement.
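As a trivial, hedged illustration of that advice (my own toy example assuming .NET 2.0 generics; the names are made up), leaning on the built-in collections looks like this:

    // Use the framework's generic collections instead of hand-rolled
    // hash tables and sort routines; they are already heavily optimized.
    using System;
    using System.Collections.Generic;

    class FrameworkCollectionsDemo
    {
        static void Main()
        {
            // Hash-based lookup without writing your own hash table.
            Dictionary<string, int> sampleRates = new Dictionary<string, int>();
            sampleRates["cd"] = 44100;
            sampleRates["dat"] = 48000;
            Console.WriteLine(sampleRates["cd"]);

            // A tuned sort without writing your own quicksort.
            List<double> peaks = new List<double>(new double[] { 0.7, 0.1, 0.4 });
            peaks.Sort();
            Console.WriteLine(peaks[0]);   // 0.1
        }
    }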

My conclusion? C# and .NET is a perfectly satisfactory toolset for desktop applications.
rond
Sunday, March 11, 2007
 
 
Take a look at this article regarding performance.

http://www.codeproject.com/dotnet/primenumbersprojects.asp

I realize it doesn't deal with the GUI aspects, but it does deal with the computational aspects of C#.
-Rob
Sunday, March 11, 2007
 
 
"You could be correct, but I suspect that C++ is still used for many things."

Isn't Visual Studio still written in C++?  If so, I wonder when the day will come when MS begins to eat their own dog food and write the whole IDE in C#.  I bet that day never comes.  The Delphi IDE is written in Delphi, well actually there is also some .NET code in there now, but Delphi 7 was full Delphi native Win32 code.

Is MS Expression Designer fully written in C#?  I know it will be doing a lot of GUI rendering, but then again it will be with the WPF framework, which I believe makes use of graphics card hardware.  I think WinForms uses GDI.

"Now I'm runnin a Compaq nc8430 with Core Duo 2.16Ghz processor, and there's nothing noticeable"

That makes sense, because .NET always takes advantage of the better hardware, whereas Delphi is compiled for the lowest common denominator.  So I guess it's safe to say it's just a matter of time before .NET is the best choice, that is, as the .NET runtime improves and everyone starts using higher-powered hardware.  I also have a feeling Intel and AMD will start optimizing processors for JIT compilers, if they have not already done so in the new cores.

It would be interesting to see what the lowest common denominator of hardware is at which apps perform on par in both native code and .NET.

So I guess if I do continue to develop in native Delphi, it is probably smart to choose 3rd-party software that has .NET support, so code will not have to be rewritten to port.

Here is a question:

I know that you can call .NET assemblies from native code but there is COM overhead since interop is done using COM.  So is that overhead on every call or just on one initial load up front? 

In one part of my program I do a lot of parsing of Amazon XML using XPath, and I am wondering if I can get almost the same performance from the .NET XML libs as from the wrappered libxml2 Delphi ones I use (once past the initial connection to the assembly).  Of course, if there is overhead on every call then maybe it's best I stick with the native stuff for a while.  But I want to see if this will give me a good opportunity to learn the .NET XML framework libraries.
Jeff C. Send private email
Monday, March 12, 2007
 
 
If you give a competent C# programmer and a competent C++ programmer the same task and time constraints, the C# program will be faster and more reliable. You will also find it easier to find the skills required if you need to expand to more than one developer.

However, if the C++ programmer is good enough and is given enough time, I don't doubt the program can be made faster.

For anyone who hasn't seen it, this is a great read:
http://blogs.msdn.com/ricom/archive/2005/05/10/performance-quiz-6-chinese-english-dictionary-reader.aspx
Adrian
Monday, March 12, 2007
 
 
".NET is very fast in terms of desktop performance. the problem is, you don't get this performance out of the box and many developers have no clue how to write high performance managed code."

QFT.  Anyone who still thinks that .NET is slow needs to download Paint.NET right now: http://getpaint.net/index2.html

That's a GUI-heavy application written in 100% C# -- you can download the source code if you don't believe it.  Yet it has a snappy GUI and is way faster than Adobe Photoshop, for example.

What did the authors NOT do?  Rely on the Visual Studio "designers" to create their GUI for them, of course...
Chris Nahr Send private email
Monday, March 12, 2007
 
 
What did the authors NOT do?  Rely on the Visual Studio "designers" to create their GUI for them, of course...

Doesn't that defeat the purpose of using a RAD tool or IDE?  So to make it fast I have to code the GUI from scratch? Yikes.
Jeff C. Send private email
Monday, March 12, 2007
 
 
It really, really depends on what kind of GUI you need to build.  For all of the development I've ever done, the tools presently in VS.NET produce excellent and fast code.  You still have to figure out how rendering works, and when to build UserControls, and what costs docking imposes, and so on, but the cost of learning those things is small compared with the innumerable benefits.
Robert Rossney Send private email
Monday, March 12, 2007
 
 
"Doesn't that defeat the purpose of using a RAD tool or IDE."

The WinForms designer of Visual Studio is not the sole purpose of the .NET Framework.  .NET is not just an IDE backend like VB6 was, and VS is also much more than just a .NET control designer.

Also, people made GUIs and desktop applications in other languages for many years before there were any RAD tools around to help them.

Coming from a C++ background myself I don't see why the limited usefulness of the Visual Studio designers is such a big deal.  In fact I kind of expected it. ;)
Chris Nahr Send private email
Tuesday, March 13, 2007
 
 
Sez JeffC:

> I know that you can call .NET assemblies from native code
> but there is COM overhead since interop is done using COM. 
> So is that overhead on every call or just on one initial
> load up front?

There are two sources of overhead:  the one-time cost of loading your assembly and the continuing cost of marshalling data between managed and unmanaged code.  The former is small and happens once; the latter is likely to be unnoticeable unless you're making thousands of calls.

> In one part of my program I do a lot of parsing of Amazon
> XML using XPath, and I am wondering if I can get almost
> the same performance from the .NET XML libs as from the
> wrappered libxml2 Delphi ones I use (once past the
> initial connection to the assembly).

I can't compare it meaningfully to Delphi's library, but the XPathDocument class in the .NET Framework is fast as hell.  On large XML documents it executes queries perceptibly faster than the MSXML DOM did. 

I do a ton of XML programming, and now that I have my head around the .NET 2.0 framework I'm finding that I almost never use the XmlDocument class.  I use XmlWriter to create XML documents, and XPathDocument, XPathNavigator and XPathNodeIterator to read them.  The difference in performance is striking:  most of the time you don't need all of the features of the fully-implemented DOM, and by doing without them you avoid incurring overhead for things you aren't going to use.
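For anyone who hasn't tried that approach, here is a minimal sketch of reading a document with XPathDocument/XPathNavigator/XPathNodeIterator (the file name and element names are invented for the example, not taken from Amazon's actual schema):

    using System;
    using System.Xml.XPath;

    class XPathDemo
    {
        static void Main()
        {
            // XPathDocument is a read-only, query-optimized store; no full DOM is built.
            XPathDocument doc = new XPathDocument("items.xml");
            XPathNavigator nav = doc.CreateNavigator();

            // Select every Title element and print its text content.
            XPathNodeIterator it = nav.Select("//Item/Title");
            while (it.MoveNext())
            {
                Console.WriteLine(it.Current.Value);
            }
        }
    }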
Robert Rossney Send private email
Tuesday, March 13, 2007
 
 
Thanks for the .NET XPath Info, I will look into this.

I found these articles on .NET Application performance tips:

Application Responsiveness
http://www.ddj.com/dept/windows/192700235?pgno=1

Practical Tips For Boosting The Performance Of Windows Forms Apps
http://msdn.microsoft.com/msdnmag/issues/06/03/WindowsFormsPerformance/

If anyone has others, please post them here.  Thanks.
Jeff C. Send private email
Tuesday, March 13, 2007
 
 
Someone above cited Paint.NET as an example of a fast and efficient .NET application, though in my experience it's more of a counter-example. Paint.NET is anything but fast when working on digital camera images (6 Mpix plus), and with a couple of pictures open it's already capable of bringing even high-end machines to their knees memory-wise.
It is decent when working with small images, and certainly more than capable of handling icon-sized images, but for anything larger it's just /not/ an application you want to cite as an example of a "good" .NET application.
It's better than Paint, and it's free, but that's about it.
John Send private email
Tuesday, March 13, 2007
 
 
I don't have any opinion regarding actual differences of speed, whether they exist or not or to what extent they exist. 

I do have a comment on all this, though.  In my experience, the typical sort of custom desktop applications people build (at least the ones I build) have little problem with the speed of underlying calculations.  Differences in speed in this area would make a huge difference on a server, but when you've got an entire machine's CPU devoted to running a simple desktop app, the speed of underlying calculations is often not really an issue at all; you could use almost any language and be fast enough.

On the other hand, the speed of things like opening a new form, graphical updates, etc. is something any user will notice.  The difference may seem trivial; it may be purely a problem with user "perception".  But it is a problem.  Fast graphical updates make an app feel snappy and responsive to a user; slower updates make the app feel sluggish.

From many things I've read, my fear is that .NET apps still, after all these years, give users the impression that they're slow, because form opening and graphical updates are noticeably slower than with native code.

Is there general agreement on this?  Or is this a case where it's sometimes true, sometimes not?
Herbert Sitz Send private email
Tuesday, March 13, 2007
 
 
Apologies for all the grammatical errors in the message I just posted.  I'm not an English-as-a-second-language speaker.  Really.  I was just jumping around in the message as I wrote it, editing here and there, and I did an incomplete job of it.
Herbert Sitz Send private email
Tuesday, March 13, 2007
 
 
" Someone above cited Paint.Net as an example of fast and efficient .Net application, though in my experience it's more a counter-example ..."

I totally agree with this opinion. When working with large images, performance completely sucks. Filter speed is even worse. I compared the performance of Paint.NET filters with native ones (I used Delphi for the native implementations), and it seems that the speed of the Paint.NET filters is much worse.

And just for fun. Try opening and editing this image using Paint.NET:

 http://www.exsotron.com/exs_pics/bucket/001370_2.jpg

WARNING !!! jpeg size is 15MB

This image is produced by my real-time music visualization software. Photoshop is perfectly capable of handling it, while Paint.NET refuses even to open it.

So I think that .NET is suitable for producing some types of applications in a satisfactory manner, but as a general-purpose tool that can cover a wide spectrum of applications it is totally inadequate.
Kostya Poukhov Send private email
Tuesday, March 13, 2007
 
 
> So I think that .NET is suitable for producing some types of
> applications in a satisfactory manner, but as a general-purpose
> tool that can cover a wide spectrum of applications
> it is totally inadequate.

Or maybe it's just a minor implementation detail that causes Paint.NET to choke on larger files.  Until the cause is clearly identified as a .NET problem and not a design/implementation issue with the application, one cannot assert that .NET isn't appropriate for developing this kind of application.
cipher
Tuesday, March 13, 2007
 
 
>Or maybe it's just a minor implementation
>detail that causes Paint.NET to choke
>on larger files.

No, it is not. You also skipped the part about filters. Put it this way: when I do native stuff I concentrate on the problem itself. When I do .NET stuff I have tons of problems with optimizing performance. It is just unproductive. Not everything is bad there; as was mentioned before, some math performs just fine, and many parts of many apps are not performance sensitive. My complaint is that it sucks as a multipurpose general tool when I want guaranteed results.
Kostya Poukhov Send private email
Tuesday, March 13, 2007
 
 
"We need a credible platform for building desktop apps. It happens to be .Net at the moment and I'm not sure anyone's going to step up the plate and come up with an alternative. .Net is great but some parts are so slow its annoying but again, what can you do?"

Man, you people are pretty uninformed. There is a credible platform right under your noses, but you all insist on wearing the M$ blinders.

The best way to develop desktop applications is CodeGear Delphi!!!!!!!!!!  Take off the darn M$ blinders and take a look at Delphi; it's been doing the same stuff as .NET for the last 12 years.

What more can I tell you than to try out Delphi?
Bob Send private email
Tuesday, March 13, 2007
 
 
Bob, didn't CodeGear jump on the .NET bandwagon as well?

Anyway, some people here mentioned large photos and slow performance in Paint.NET. Of course this can be fixed. When comparing to Photoshop, please keep in mind that Paint.NET is a really young product with a lot of potential to optimize still ahead.

Managed code is capable of being as fast as native code; it's just that you need to do extra work to achieve this, which is OK because native code can sometimes be a really nasty time sucker as well.
lubos
Wednesday, March 14, 2007
 
 
>When comparing to Photoshop, please keep in mind that
>Paint.NET is a really young product with a lot of potential
>to optimize still ahead.

Maybe, but people keep citing it as an example of an efficient .NET application, while it's painfully obvious to anyone with experience in the graphics field that it's not up to the job. It's nice for a student-level application, don't get me wrong, it's a good knowledge demo, but it's far from rivaling other graphics applications, or what many graphics libraries give you "out of the box" for that matter.

There is an urban myth in the development world that everything can be optimized later on, as if that were the necessary conclusion of the "don't optimize early" rule. It's not. In practice, code that hasn't reached a level of performance in the same ballpark as its competitors by the time of the first working release is code that will most likely never reach a competitive performance level (short of a complete rewrite, as opposed to mere tweaking/refactoring).
John Send private email
Wednesday, March 14, 2007
 
 
This is getting ridiculous.  So because Paint.NET, a freeware application created as a student project, can't handle big files and isn't a full superset of Photoshop this somehow proves that .NET can't do desktop applications?

I was pointing out that Paint.NET has a nice responsive GUI which .NET FUDists claim isn't possible.  The program is faster than Photoshop for those image files that it can handle.  The accidental limitations of the underlying image manipulation engine are neither here nor there when we're talking about the general suitability of .NET for desktop apps.
Chris Nahr Send private email
Wednesday, March 14, 2007
 
 
Lubos :

AFAICT, CodeGear seem to have (at least partially) jumped back off the .NET bandwagon (thankfully) - there seems to have been the belated realisation by them that folk who want to go down the .NET route pretty much always end up going straight to MS anyway. IME, the folk that stick with Delphi do so because they want something different from what MS are offering, for whatever reason. That is the trough Codegear should (and hopefully will) be ploughing from now on...
Lurkio
Wednesday, March 14, 2007
 
 
Chris Nahr :

YOU were the guy making claims about Paint.NET being "way faster than Photoshop", you shouldn't be surprised when folk call you out on it. As Jeff C. stated up the thread, the need to circumvent the Visual Studio designers in order to get something approaching snappy performance doesn't exactly stand as a shining example of the alleged startling RAD productivity boost .NET kool-aiders claim for their development platform...

Funnily enough, I'd agree with you that Windows Forms (from v2.0 onwards) is certainly adequate in terms of performance for general purpose desktop applications...but that's about as far as I'd go in praising it. If WPF takes off in the next few years, Windows Forms is going to look old very quickly IMO...
Lurkio
Wednesday, March 14, 2007
 
 
I thought Chris had a good point about the UI being very snappy.  Who cares if you need to do some extra work to get a better UI in .NET?  I've been doing extra work for years in C++/MFC to get a better interface.  .NET seems to offer many goodies out of the box and if some tweaking is necessary, no biggie from my standpoint.

For a student project, I'm actually pretty impressed.  It's a nice little program for editing smaller images.  Under Vista 64 it actually handles bigger images with ease.
-Rob
Wednesday, March 14, 2007
 
 
Go to
http://dada.perl.it/shootout/
and see for yourself.
Keep in mind that the examples are a little outdated.
Delphi 7 was released in 2002.
Delphi Send private email
Wednesday, March 14, 2007
 
 
Frankly, compared to building GUIs with C++/MFC, virtually anything would seem like RAD to you :-)

As for Paint.NET - nice little student demo, reasonably snappy GUI, et cetera but really...BFD. What does it prove? With tweaking, you can write .NET apps that feel just about as responsive as those written in VB or Delphi five years ago?
Lurkio
Wednesday, March 14, 2007
 
 
And if you really want to see something impressive then look at
http://www.kanzelsberger.com/pixel/

It's done with FreePascal and available for a number of platforms. And it's done by a one-man show.
Delphi
Wednesday, March 14, 2007
 
 
Lurkio - I understand. 

"Frankly, compared to building GUIs with C++/MFC, virtually anything would seem like RAD to you :-)"

To be quite frank, I went the Delphi route for a while.  You can still spot a Delphi app pretty easily.  It always has a "strange" look compared to products built with Microsoft development tools.

I've been around the game long enough to see the various tool wars.  It's all quite funny when you think about it.  We have all formed our own biases and try to force them on others, even when we really have no experience using those tools.

If you would have asked me about using C# and .NET a month ago, I would have laughed and said, C++/MFC is the only way to go.  I no longer feel that way.  I'm more and more impressed with C# and the whole .NET framework for the software I'm developing.  Just my 2 cents.
-Rob
Wednesday, March 14, 2007
 
 
"Bob, didn't CodeGear jumped on .NET bandwagon as well?"

They still have a kick ass native Delphi version as well as Delphi.net and C#.
The Delphi 2007 version that is going to be released soon is win32 native only with full support for Vista.
They will be updating the Delphi.net and C# parts of the CDS Studio later this year.

Check it out:  http://www.codegear.com

Everyone is saying how great .NET is, but the fact of the matter is that many of the cool things you can do with .NET have been in Delphi since 1995. If all the VB 6 people had switched to Delphi instead of C#/VB.NET, the world would be a much better place.
Also by sticking with MS tools you are simply padding the pockets of Bill and Steve. I for one don't want to make those assholes any richer than they already are.

Also if anyone is looking for a good replacement for M$ SQL server be sure to check out PostgreSQL at http://www.postgresql.org
Bob Send private email
Wednesday, March 14, 2007
 
 
I'm sure Delphi is just as good on the desktop as the .NET Framework, if not better. I've heard many good things about this platform, but the point is that .NET is getting wider acceptance across the market.

If you are a start-up company, then it makes a lot more sense to develop in .NET, for many reasons. For example:

- more developers, useful if your application exposes a public API
- ASP.NET, in case you build smart clients; you might want to host some DLLs on a webserver and easily work with webservices
- Microsoft is a stronger player than CodeGear. This guarantees your investment in the codebase will be supported for at least 10-15 years.
lubos
Wednesday, March 14, 2007
 
 
>> If you are a start-up company, then it
>> makes a lot more sense to develop in .NET
>> for many reasons.

The first reason is Empower-ISV ($375) vs. Delphi ($899). I get the operating system licenses I need for testing plus gobs of other useful software. I considered CodeGear but I am unaware of any developer package like Microsoft's Empower-ISV to reduce the startup costs (and be legit for the IDE at least).
Ken Brittain Send private email
Wednesday, March 14, 2007
 
 
Yes, Empower ISV is cheap, but they make you jump through hoops to get it:
https://empower-isv.one.microsoft.com/isv/programguide/Requirements.aspx
You could also become a CodeGear Technology Partner and get all the CodeGear tools for free.
Not to mention that the ISV program is basically there to crush other companies like CodeGear.
Do yourself a favor: take off your M$ blinders and stop padding the pockets of Bill and Steve.

M$ gives stuff away or very cheaply simply to crush the competition.

Also, you can get the Turbo Pro version of Delphi for $349, I believe, and it's the full deal.  They also have subscriptions available that are much cheaper if you keep up with them.

CodeGear is doing things differently now so give them a look.
Bob Send private email
Wednesday, March 14, 2007
 
 
"I'm sure Delphi is on desktop just as good as .NET framework if not better. I've heard many good things about this platform but the point is that .NET is getting wider acceptance across the market."

Again, this is pure FUD.  You can still use Delphi and have a guaranteed upgrade (or in the case of .NET, downgrade) path to .NET with Delphi for .NET.  M$ didn't provide that option to the hordes of VB 6 developers out there; instead they gave them VB.not.  Things are not as bright as you make them out to be by going with M$.

They also tie you to THEIR platform: ASP.NET only runs on the crappy IIS server (Mono does not cut it yet), you are forced to run MS SQL Server on Windows, forced to do this and forced to do that.

Also, the argument of more developers is FUD as well.  Sure there are more developers, but any time you have a huge number of something, a good percentage are not really that good.
Bob Send private email
Wednesday, March 14, 2007
 
 
Some interesting comments, but let's keep the topic on track.  This is not a Delphi Win32 vs. .NET thread.  What I really wanted to get was a feel for people developing applications and their opinions on the performance of .NET vs. native apps in terms of real-world applications (GUI, etc.), not benchmarks.

It seems that if you are a software developer providing products like FeedDemon, Skype, etc., then native code is the best bet, since your audience is varied and has many different hardware configurations.

However, if you are developing internal applications for a company and you can control the baseline hardware requirements then .NET is probably a better choice.

For me, my application is a software product (a DVD organizer), and thus native is probably my best choice.  I also want thumbnail display of DVD covers similar to Delicious Library, and to achieve good rendering/scrolling performance of DVD thumbnail covers, I feel native code is the better fit.  Yes, there is WPF, but on Windows XP its performance is not going to be very good, and a large part of my audience will be on XP.
Jeff C. Send private email
Wednesday, March 14, 2007
 
 
"However, if you are developing internal applications for a company and you can control the baseline hardware requirements then .NET is probably a better choice."

I would have to disagree with this since it would only apply to native as in C++/MFC etc.

You are forgetting that Delphi is just as native as a C++ app and can do anything .NET can do.  Sure, SOME things are easier to do in .NET, but for a medium to large company you would save money by using something like Delphi over .NET.  For most applications, especially database-driven ones, it takes far less code in Delphi, not to mention Delphi has things like data modules and the ability to see data in grids at design time.  With Delphi you can add a data module to your application, add an image list, then add the data module to the uses clause of whatever form needs the image list, and you can do it all at design time.  The last time I used C# I had to assign an image to every button, and if I passed the image to each control at run time I could not see what the buttons looked like at design time.  These are small things, but they sure do make your life easier.
Bob Send private email
Wednesday, March 14, 2007
 
 
As with all things, the tool you need to use is based on what you are trying to do. I don't consider .NET very good at image processing and neither does Pegasus Imaging. See this white paper on their .NET tuning for an interesting discussion of how a tool vendor optimized their toolset for .NET:

http://www.pegasusimaging.com/pic_net2_white_paper2.pdf

A while back I looked at .NET for some of the things I need to do and noticed some strange memory allocation behavior. You may want to have a look at this:

http://www.folding-hyperspace.com/program_tip_11.htm

I also looked at the speed of various container classes in Delphi, .NET and Qt here:

http://www.folding-hyperspace.com/program_tip_14.htm

Qt on the VS2005 C++ compiler had the fastest containers for basic operations and sorting.

I have yet to figure out how to process an image and then display it on the screen quickly with .NET. It could possibly be done by streaming it into a graphics object, but .NET makes the raw graphics data very hard to get at (only one pixel at a time). In Delphi you simply use TImage, and in Qt you use QImage, QPixmap and QGraphicsView. Check out the Qt 40,000 chips example if you want to see real graphics power. I guess I will need to look at the Paint.NET source and see how they got .NET to do it.

Both Delphi and .NET have their place, you just need to look at what you are trying to do and see which one is the better fit.
James Gibbons Send private email
Saturday, March 17, 2007
 
 
Just a minor update. Now that I have looked at Paint.NET, I found out how they move image data around. Image processing is done in a "Surface" object, which is the raw array of pixels, and it is quite easy to get access to the raw Bitmap image pixels. I never noticed it in the documentation, but Bitmap.LockBits gives access to a BitmapData object, and this contains Scan0, which is a raw pointer to the image bytes. The example for Scan0 uses System.Runtime.InteropServices.Marshal.Copy to copy the raw bytes out of and back into the Bitmap.
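As a rough sketch of that pattern (my own toy example assuming a 24bpp bitmap, not code taken from Paint.NET), the LockBits/Scan0/Marshal.Copy dance looks something like this:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    class LockBitsDemo
    {
        // Invert every byte of a 24bpp bitmap by copying the raw pixels out and back in.
        static void InvertInPlace(Bitmap bmp)
        {
            Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                           PixelFormat.Format24bppRgb);
            try
            {
                int byteCount = data.Stride * data.Height;
                byte[] pixels = new byte[byteCount];

                Marshal.Copy(data.Scan0, pixels, 0, byteCount);   // raw bytes out
                for (int i = 0; i < byteCount; i++)
                    pixels[i] = (byte)(255 - pixels[i]);
                Marshal.Copy(pixels, 0, data.Scan0, byteCount);   // raw bytes back in
            }
            finally
            {
                bmp.UnlockBits(data);   // always unlock, even if processing throws
            }
        }
    }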

It is so easy to miss what you want in the .NET documentation. The Qt documentation is better organized, and they include good discussions on how to use each class, something usually missing from the .NET documentation unless you drill down into the method descriptions; but if you don't know what you're looking for, it is kind of hard to figure out where to start drilling. Stob did a great parody of MS documentation on The Register last week, and it exactly fits this situation.

My experience with Paint.NET: it won't load the large JPG referenced above on my laptop; even with 2GB it throws an out-of-memory error. To hold a 22 MB image it will use over 100 MB, so memory usage is not very efficient. Running a Gaussian filter on an image this size is much slower than in a native application I use.

Gaussian Blur, 10 pixels, in Paint.NET: about 10 seconds.
Gaussian Blur, 10 pixels, in ArcSoft PhotoStudio: about 3 seconds.
James Gibbons Send private email
Saturday, March 17, 2007
 
 
BitmapData Description from .NET documentation:

(start)
Specifies the attributes of a bitmap image. The BitmapData class is used by the LockBits and UnlockBits methods of the Bitmap class. Not inheritable.
(stop)

Not a word about access to the raw image bytes!
James Gibbons Send private email
Saturday, March 17, 2007
 
 
And here is the Verity Stob article:

Unhelpful Microsoft help denies helpless millions help

http://www.regdeveloper.co.uk/2007/03/08/msdn_gloom/
James Gibbons Send private email
Saturday, March 17, 2007
 
 
Update on Paint.NET: A search for the "unsafe" keyword shows that it is used 200 times in the code and in the effect routines. From the C# MSDN docs:

<Quote>
In the common language runtime (CLR), unsafe code is referred to as unverifiable code.

If you use unsafe code, it is your responsibility to ensure that your code does not introduce security risks or pointer errors.
<unQuote>

Paint.NET is not a pure .NET application and relies on the programmer to verify security if that is even possible. In addition to being slower than native code, it really isn't a true "sandbox" .NET application.
James Gibbons Send private email
Sunday, March 18, 2007
 
 
The thing is that C#/.NET has a certain amount of momentum behind it.  Delphi may be a better choice for desktop development.  I don't doubt that.  It isn't so much about programmers having MS blinders on as it is about the employment opportunities with .NET vs. Delphi.  If there are more .NET programmers than Delphi programmers then more apps will be written in .NET.  A larger .NET code base means more employment opportunities for .NET programmers.  The cycle continues.

Also, I doubt most developers use .NET so their app will run in a sandbox. It probably has more to do with the productivity gains.
cipher
Sunday, March 18, 2007
 
 
James Gibbons, have you ever written any C#?

The purpose of the unsafe{} block is to make intentionally "unsafe" actions explicit. The term "unsafe" doesn't mean "insecure", it means "stuff that can sometimes cause problems when used by inexperienced programmers". The most common examples of unsafe code are calls to unmanaged DLLs (often required to work with 3rd party legacy libraries/controls), and good old-fashioned pointer operations.
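For the curious, here is a tiny illustration of an unsafe block (my own example, nothing to do with Paint.NET specifically; it must be compiled with the /unsafe switch):

    class UnsafeSum
    {
        // Sum a byte array through a raw pointer instead of indexed access.
        static unsafe int Sum(byte[] data)
        {
            int total = 0;
            fixed (byte* p = data)      // pin the array so the GC won't move it
            {
                for (int i = 0; i < data.Length; i++)
                    total += p[i];      // raw pointer arithmetic, no bounds check
            }
            return total;
        }

        static void Main()
        {
            System.Console.WriteLine(Sum(new byte[] { 1, 2, 3 }));   // prints 6
        }
    }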
rond
Monday, March 19, 2007
 
 
James is also wrong in assuming that the use of "unsafe" means that Paint.NET is not a "pure" .NET application.  The keyword is used to temporarily suspend the .NET garbage collector and type checking for specific variables, but those variables are still managed by the .NET runtime system, and the "unsafe" code is still executed by the .NET runtime system.

It's true that Paint.NET isn't fully "sandboxed" like a web application would be, but last I checked that wasn't a requirement for desktop applications.  Besides, a .NET application with some "unsafe" code is still much safer (regarding issues like buffer overflows) than pure native code which is 100% "unsafe" by definition.

Then again, who cares about relevance and common sense in this free-for-all FUD thread?  Let's invent some more bullshit reasons why .NET can't make desktop apps!  I bet the desktop icon for Paint.NET looks somehow inferior to your favorite native code app, too!
Chris Nahr Send private email
Monday, March 19, 2007
 
 
- Microsoft is a stronger player than CodeGear. This guarantees your investment in the codebase will be supported for at least 10-15 years.

How about Visual Basic 6?
blur
Monday, March 19, 2007
 
 
"stuff that can sometimes cause problems when used by inexperienced programmers"

Exactly my point, rond. Paint.NET allows third-party plug-ins, and the plug-ins look like they must use "unsafe" to do their work.

"James Gibbons, have you ever written any C#?"

Yes, I wrote just enough to find a bug in the garbage collection in .NET 2.0 and haven't bothered writing any since.
James Gibbons Send private email
Monday, March 19, 2007
 
 
1. Seriously, you don't have any idea what "unsafe" means if you think it refers to the plug-in architecture.  It's a language feature that's orthogonal to extensibility.

2. My crystal ball says the bug was in your code, not in the .NET 2.0 garbage collector.  Newbies on a platform always claim they've found "bugs" in the compiler, library, runtime system, etc. but that's very rarely true.  Usually it's just their lack of understanding of the new platform that caused a bug in their own code, or an error in their understanding of the results.
Chris Nahr Send private email
Tuesday, March 20, 2007
 
 
Jeff asked for an example. Well, search Google for "act 2005 performance problems" and you will get some results. ACT 2005 was upgraded from Win32 and a flat, file-locked Win32 data store to .NET 1.1 and SQL Server. Newer versions running on .NET 2.0 seem to be a little better. Our company got hit by this mess. Computers that ran the old ACT fine with 256 MB now needed upgrades to even run ACT 2005. We had to put in a Win Server 2003 box to support it. The SQL Server data store was also a problem, because they wouldn't allow outside access, so backup software couldn't use SQL Server's backup features. Ask anyone who has gone through this upgrade and you will get a load of opinions on .NET performance.

Oh, and Chris: there is nothing wrong with the program. The garbage collector will eventually kick in and reclaim the memory, but only after there is no physical memory remaining, and at that point the whole computer is running very slowly. If .NET GC is so great, why are there so many articles on how to get it to work right?
James Gibbons Send private email
Tuesday, March 20, 2007
 
 
Chris, search Google for "NET garbage collector problems" and you will find this:

http://weblogs.asp.net/pwilson/archive/2004/02/14/73033.aspx

I would rate Paul Wilson as an experienced programmer. It appears he ran into the same problem I did.
James Gibbons Send private email
Tuesday, March 20, 2007
 
 
And concerning "unsafe" Chris, on

http://www.codersource.net/csharp_unsafe_code.html

they state that by using "unsafe" you can:

<quote>
    *  Overwrite other variables.
    * Stack overflow.
    * Access areas of memory that doesn't contain any data as they do.
    * Overwrite some information of the code for the .net runtime, which will surely lead your application to crash.
<unquote>

Taking this source as accurate, and I'm not sure these statements are 100% correct, anyone writing plug-ins for Paint.NET and using "unsafe" can have a lot of fun with your computer.
James Gibbons Send private email
Tuesday, March 20, 2007
 
 
Repeatedly stating that code marked as "unsafe" can be, in fact, unsafe still doesn't say anything about the unrelated issue of plug-ins, nor does it make .NET any worse than inherently unsafe native code.

The Wilson article does not describe any "bug" of the GC, it's merely discussing its behavior and giving usage tips.  Your claim that the GC only reclaims memory when no physical memory is left is simply untrue.  You evidently forgot to clear references to large objects which is also what Wilson was talking about.

You think it's problematic that you need to learn something about the system you're going to program?  That says a lot about you, but nothing about the system.

Anyway, you're clearly just trolling here -- googling for random pages and pretending they support your baseless claims when in fact they do nothing of the sort.  This is a waste of time, and I'm done with this thread.
Chris Nahr Send private email
Wednesday, March 21, 2007
 
 
Well Chris, I'm sorry you think of me as a troll. The fact that about half of your messages are aimed at attacking my integrity rather than providing a logical analysis of what I was saying shows who the troll is, in my opinion.

I'm also sorry if you got hung up on the plug-in subject, but where do most of the bugs come into systems? They come in as ActiveX and other script-based malware through Internet Explorer (yes, I look at our company AV logs, and that is where most of it seems to come from). I was simply pointing out that letting someone you don't know write a plug-in, and then installing it on your computer in an application that bypasses the .NET sandbox, might not be a good idea. Perhaps I didn't make it clear enough.

If you read the Wilson article carefully, you would have noticed that setting variables to null after use didn't fully solve the problem he was having. In my test application there is a check box that causes GC.Collect() to be called each time I remove an array from the list. If this happens, memory gets released from the working set of the .NET process, just like it does when you delete large arrays in native C++. Wilson could have reduced his problem even further if he had simply called GC.Collect() after releasing large blocks. If you don't force garbage collection, then .NET applications can use up all of physical memory, and this leads me to call it a bug, but it appears that .NET programmers prefer to call it a feature. Guess I just set my standards too high. I found it rather amusing that Wilson could only support half as many users on .NET as he could with the older Access application, yet he still highly recommended .NET for future work. Let the IT gods rule!

And I might add that how .NET is installed on a machine appears to change the way the garbage collector works. I reran the test on my laptop and could not get it to use up all of physical memory by simply adding and deleting memory from the list, as I was able to in the tests shown on my web page. The only thing I can think of is that I installed VS2005 on my laptop, and this may have made some changes to the memory tuning for .NET. This is perhaps why not everyone has been able to repeat my results.
James Gibbons Send private email
Wednesday, March 21, 2007
 
 
I have emailed the author(s) of Paint.NET and asked them to respond to this thread.  I think it's best we don't make any assumptions until we get solid information on whether Paint.NET does indeed use native code to help its performance in some areas or in plug-ins.  It seems code marked unsafe can have different meanings depending on its context.

As I mentioned at the beginning of this thread, it would be very cool if someone like MS or CodeGear could build a benchmark application in both native and .NET technologies so that a true "application" comparison could be made.  Then we could look at what the baseline hardware is for .NET apps to perform at native-like speed in both internals and GUI.  The DevExpress grid is used in a lot of apps, and since they have both VCL (native Delphi) and .NET versions, it would probably be a good choice for putting the GUI through real-world development tests.
Jeff C. Send private email
Friday, March 23, 2007
 
 
Found this link, looks like a good read:

Improving Managed Code Performance:
http://msdn2.microsoft.com/en-us/library/ms998547.aspx
Jeff C. Send private email
Friday, March 23, 2007
 
 
Ok somehow I've been dragged into this conversation, at the request of Jeff :)

The big question that everyone is bickering about is whether Paint.NET is a “native” or “pure” .NET application. I'm not really sure that everyone is in agreement over what that even means, so I'll just state the facts. That way, while you're reading my post and forming your own conclusions and opinions, I can be off drinking beer and hanging out with girls.

* Paint.NET has not been a student project for more than 2 years. It is maintained and updated primarily by myself, with contributions from Tom Jackson. We both work for Microsoft. The last “student” release of Paint.NET was version 2.0 (not 2.1, or 2.5, or 2.6, or 2.7, or 3.0). So please do not call it a student project.

* PaintDotNet.exe and its DLL dependencies, at least the ones we've written, are all written in C#.  There is no “native” code (C++ or otherwise) in there that we wrote.

* All system-dependent code resides in PaintDotNet.SystemLayer.dll, which makes heavy use of P/Invoke so that it can access a lot of Win32 API functionality that the framework doesn’t expose. This makes extensive use of a built-in .NET runtime feature. Some people think this makes Paint.NET less of a “pure” .NET application. I personally don’t care: my end-user target is … tada … a Windows user. Segregating all the system-specific code in to 1 DLL serves to keep my code cleaner and better organized, and provides someone *else* the opportunity to have an easier job of porting if they want (like Miguel de Icaza).

* Paint.NET has a shell extension that provides thumbnails to Explorer for *.PDN files. This is written in C++ using COM, and registered at installation time.

* Before the setup wizard starts, a small exe called SetupShim is first run. Its job is to determine if .NET is installed yet. If it is, it launches the wizard.  If it isn’t, it either launches the .NET installer if available, or presents an error dialog. This SetupShim is written in C. If .NET isn’t installed, you can’t run a .NET executable, simple as that.

* Getting back to the application, "unsafe" code blocks are used in areas where it's more natural or performant to work with pointers on bitmap data.

* However, unsafe code is not a requirement for writing a plugin (re: James Gibbons' comment). .NET provides for Code Access Security but we have not made use of it. We might in the future, so that plugins do not get UnmanagedCode permission unless you explicitly allow it.

* Paint.NET's image processing is *not* built around System.Drawing. It did not have all the functionality or performance for what we needed to do. We have our own native Surface class for allocating a bitmap, which then takes into account *our* requirements for allocation, row ordering, pixel format, access to pointers, etc. There is the ability to alias a System.Drawing.Bitmap to a Surface so that you can then use System.Drawing/GDI+ to draw on to a Paint.NET surface (this is used in, for example, the shape drawing tools).

* As a corollary of some of the above, all the filters (effects) in Paint.NET are written in C#. We have not spent a lot of time optimizing the heck out of them. We have never had a goal of outperforming Photoshop in this area, and frankly we don’t have the resources to do that anyway.

* The fact that Paint.NET "barfs" on large images is a design/architecture issue. It is *not* a .NET issue. Applications like Photoshop and GIMP essentially use custom memory managers whereby a bitmap is split into smaller tiles (for ex., 128x128 pixels). All the image processing code must then be aware of this and work with these tiles using a check-in/check-out system. This is great for memory use as you only NEED to have enough memory for the tiles that are currently active. It also imposes better memory locality. However, it imposes a tax on developers and is a bug hazard. Paint.NET always keeps the entire bitmap in memory, and each bitmap is created with 1 large allocation. This was a conscious design decision on my part way back in Paint.NET v1.0 because I didn't want to have to worry about writing code all over the place for requesting tiles, doing offset calculations, and submitting tile changes. AND I didn't want the other developers to be stumbling over this stuff. That's a big part of what has enabled Paint.NET to rapidly evolve and add new features and to keep bug counts low. However, going forward this is probably going to have to change, for many reasons that are outside the scope of this posting (plus I made the sorta-mistake of hoping that 64-bit adoption would go faster than it actually has). Some people seem to think that System.Drawing was designed to enable any imaging scenario imaginable, from 16x16 icons all the way to 15-layer, 30 megapixel superimages. This is not true, and any time you have very specific performance and memory use requirements (such as is the case with Photoshop, GIMP, Paint.NET, et al.) you're going to need to write a lot of custom code. That's just a simple fact.

* People have stated that Paint.NET's filters (e.g., Gaussian Blur) are "slow." This again is not a .NET issue and is a design/architecture consequence. In Paint.NET I designed the effect system to reduce the implementation complexity for plugin authors, and I wanted all effects to get 5 things for "free": rendering previews for configuration dialogs, progress reporting, cancellation, clipping to a selection region, and multiprocessor scaling. To do this, the effect rendering harness in Paint.NET imposes a lot of control over the rendering workflow, whereas many other applications tell the effect/filter to just go off and do its thing and then rely on it to report progress through a callback, and to do its own clipping, and its own multithreaded scaling (if any). So, one of the reasons that Gaussian Blur is "slow" in Paint.NET is that it cannot always recycle some calculations between rows that are rendered because it cannot assume what order the rows are being rendered in (multithreading), nor does it even have control over which rows are rendered. Plus, the Photoshop guys have spent a LOT of time optimizing this scenario and they are paid specifically to make that stuff awesome. We just can’t compete with that, and it has never been a goal of ours to do so anyway.

* Paint.NET currently has a reputation for starting up really fast. This is a result of applying classic optimization strategies. Namely, you can do "nothing" really fast. Everything in Paint.NET is either lazy or defer loaded. Even the strings for menu items aren't loaded until you actually click on a menu. Plugins are not discovered until after the UI is shown, etc. I spent a lot of time profiling and tweaking the startup code path to eliminate as much code, memory use, and dependencies as possible. We also use ngen at installation time (“Optimizing performance for your system…”). This is the type of performance that is definitely NOT achieved "out of the box," and is something you would probably need to spend a lot of time on for any non-trivial application regardless of what language or framework you are using.

* Someone mentioned that Paint.NET is a “young” product. This is also true. Keep in mind that comparisons between Paint.NET and GIMP or Photoshop, or *any* commercial imaging application, are going to be weirdly lopsided. On Photoshop you have a team of developers who are 100% focused (both time and monetarily) on making Photoshop rock. With GIMP you have a ton of contributing developers from all over the world. On Paint.NET you have about 20% of my time, and hopefully 5% of Tom Jackson’s time, and that’s it. When taken in that context, I personally consider Paint.NET to be a phenomenal accomplishment. The download rates concur with me: users like Paint.NET.

Now, in my opinion and experience, .NET and WinForms are very well suited for writing desktop applications in Windows (remember, full disclosure: I work for Microsoft). However, you still need to know what you’re doing and spend the right amount of development effort in order to write a very professional, very performant application. This is true no matter what language or framework you’re using. For some applications, performance is less of an issue than just getting the darn thing to work (please remember those LOB apps), and I’ve found WinForms to be exceptional for those types of apps. In my experience, C#/WinForms is many many times more productive than writing unmanaged/native UI code (C++ and ATL, I'm glaring at you in specific). As for a comparison to Delphi, I don’t know. I’ve never worked with it.

-Rick Brewster (author of Paint.NET)
Rick Brewster Send private email
Friday, March 23, 2007
 
 
Rick, I didn't mean to imply that unsafe was necessary in the plug-in, just that it was possible and that it would raise concerns over security when using third party plug-ins. It appears that you have given this some thought too.

I noticed in your code that you make much use of GC.Collect(). In fact in your utility routine you call it twice as if one pass isn't enough. My guess is that if you didn't call Collect that your application would soon start to hog memory. I am unable to compile Paint.NET with a standard VS2005 Pro install so I couldn't test this. One of my points is that optimization of .NET programs is not as simple as it looks, and that the garbage collection is not as automatic as advertised.

Otherwise, your paint program is very impressive given the fact that you don't drop into native and use MMX or SSE instructions like most of the other commercial programs to gain speed advantages.

I fully agree that for most applications .NET offers many advantages, such as better type checking and array access checking. Delphi also has strong type checking, and has array bounds checking as an option for Win32. I'm hunting down a heap corruption bug right now in a big C++ program, and I understand the usefulness of .NET, but for my applications I need the speed and full control of memory that only C++ provides.
James Gibbons Send private email
Friday, March 23, 2007
 
 
There is a reason that we call GC.Collect, and a reason that we call it twice (in Utility.GCFullCollect). For one, it's a matter of user perception. We do a lot of operations that, when they are finished, have produced a lot of garbage on the heap. For instance, history items always write their data to disk BUT they are held on to via a WeakReference as well (it's essentially a caching system). If we don't force a garbage collection, then users will think that the application is using tons and tons of memory and has a memory leak (if you have 2GB RAM, the GC may not care if you're using 20 MB of excess). Every time a row is added to the history, we perform a full garbage collection. So when you do something like apply an effect to a 1024x768 image, we will have 3MB on the heap that isn't really necessary to keep around anymore (the 'old' contents of the layer surface). Since the garbage collector is not aware of other processes, we don't want Paint.NET's memory use to stomp on other applications, even if keeping those history items in memory would have a performance benefit for us.

Paint.NET itself would [probably] not run out of memory if we didn't call GC.Collect (or rather, not calling GC.Collect would not cause any additional out-of-memory situations). But it would have a much larger impact on system-wide performance.

The reason GC.Collect is called twice is that the first call will collect all non-reachable objects. Finalizable objects (think of IDisposable), however, must first have their finalizer called before they can be collected. So, we call WaitForPendingFinalizers so that all finalizable objects are finished finalizing. Those objects are then promoted to the next GC generation but NOT collected. Then, we call Collect again so that those finalized objects are collected. This is just an artifact of the way the garbage collector works.
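In other words, the pattern is roughly the following (a reconstruction of the idea for illustration, not the actual Paint.NET source of Utility.GCFullCollect):

    static class Utility
    {
        public static void GCFullCollect()
        {
            System.GC.Collect();                    // collect everything unreachable
            System.GC.WaitForPendingFinalizers();   // let finalizers run on survivors
            System.GC.Collect();                    // collect the now-finalized objects
        }
    }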

"One of my points is that optimization of .NET programs is not as simple as it looks..."

I think I mostly agree, but I also think you could remove the words "of .NET programs" from that sentence and it would be equally true. Part of the issue is that it's very easy to write non-performant code in .NET. The reason is that even expensive things can be done in a few lines of code: I'll just grab that XML deserializer, toss some text at it, and voila, I have a reconstructed instance of a 500 KB data structure. That's only a few lines of code, but hardly something that happens very fast. This is both a blessing and a curse: you can get stuff implemented really fast, but then sometimes it really comes back to bite you. Over time you develop a cost model for these things and can steer your implementation accordingly. Paint.NET doesn't use XML for the update manifest pulled from the website because that is a task that runs right at application start, and System.Xml.dll is pretty heavy. But, for saving/loading of data for the file type configuration tokens ("JPEG quality = 50"), I do use XML serialization as it's just simpler when someone looks in the registry and goes "wtf is that ... Oh, looks like XML. And it looks like some JPEG configuration thing. Ok, crisis over."
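As a toy example of that "few lines of code, lots of work behind the scenes" point (JpegSettings is a made-up type for illustration, not anything shipped in Paint.NET):

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class JpegSettings
    {
        public int Quality = 50;
    }

    class SerializationDemo
    {
        static void Main()
        {
            // One constructor call, but it does reflection and code generation internally.
            XmlSerializer serializer = new XmlSerializer(typeof(JpegSettings));

            JpegSettings settings = new JpegSettings();
            settings.Quality = 75;

            // Serialize to an XML string.
            StringWriter writer = new StringWriter();
            serializer.Serialize(writer, settings);
            string xml = writer.ToString();

            // Deserialize back into a live object.
            JpegSettings restored =
                (JpegSettings)serializer.Deserialize(new StringReader(xml));
            Console.WriteLine(restored.Quality);   // 75
        }
    }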

"... and that the garbage collection is not as automatic as advertised."

I think it just isn't 100% perfect for 100% of scenarios. There will always be times when the GC just can't be as psychic as we'd really like it to be, and thus calling GC.Collect() is warranted. One problem with GC for desktop apps is, as I mentioned above, that it is only concerned with the process's own memory use and doesn't optimize itself by taking system-wide memory use into account. Developers must also be disciplined about calling Dispose() on IDisposable objects, except when the ownership and/or lifetime of the object is complicated.
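
To illustrate the Dispose() discipline in the simple-ownership case (this uses a standard framework type, nothing specific to Paint.NET):

using System.Drawing;

static class DisposeExample
{
    // When ownership is clear and local, 'using' guarantees Dispose() runs,
    // releasing the unmanaged GDI+ resources without waiting on the GC or finalizer.
    public static Size MeasureImage(string path)
    {
        using (Bitmap bitmap = new Bitmap(path))
        {
            return bitmap.Size;
        }
    }
}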
Rick Brewster Send private email
Friday, March 23, 2007
 
 
Concerning large images and performance, I wrote a paint program several years ago for internal use. Call it Paint.BCB6, because it started out on C++Builder. It makes use of the Matrox Imaging Library for all image processing because I didn't want to write my own MMX/SSE code, and we would have been a year late to market with our commercial project. Paint.BCB6 also keeps the image in memory in 24-bit RGB format and copies it to a VCL TBitmap object for screen viewing with proper gamma corrections. I use the standard Canvas.Draw to draw the TBitmap onto a TPaintBox on the screen. When zooming is needed I use Canvas.StretchDraw. The JPG image referenced above is 13500 by 9000 pixels and needs 355,958K of memory in 24-bit format. My application loads it and can scroll around very quickly at 1:1 zoom, using 732,208K of total system memory (for the on-screen and off-screen copies). If I zoom in further and use the StretchDraw function I can't scroll as fast, but the application is still usable. If it were a commercial application it would need some optimization for these large images.

First, you will need at least 1 GB of memory in your computer to even try loading this image in either Paint.BCB6 or Paint.NET. I can fully understand why Paint.NET has limits, as it keeps both an on-screen and an off-screen copy too. I can also say that either Delphi or C++Builder can easily work with these large images, provided your computer has enough memory.
James Gibbons Send private email
Friday, March 23, 2007
 
 
Oh, and thanks Rick for the info on how/why you used Collect. It is always helpful to fully understand the details. I'm just a little concerned about using Collect when my applications must respond to real world events on the order of 5 milliseconds.
James Gibbons Send private email
Friday, March 23, 2007
 
 
The .NET 3.5 stuff ("Orcas") will have some properties in the System.GC class for tuning how the GC works. This includes a "LowLatency" mode, which is designed to be used for short periods of time and is a hint to the GC that you want it to chill out.

http://blogs.msdn.com/clyon/
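
For reference, this knob eventually shipped as GCSettings.LatencyMode in the System.Runtime namespace. A hedged sketch of wrapping a latency-sensitive section with it, assuming .NET 3.5 or later:

using System.Runtime;

static class LowLatencyExample
{
    // Ask the GC to avoid blocking collections during a short,
    // latency-sensitive window, then restore the previous mode.
    public static void DoTimeCriticalWork()
    {
        GCLatencyMode oldMode = GCSettings.LatencyMode;
        GCSettings.LatencyMode = GCLatencyMode.LowLatency;
        try
        {
            // ... respond to the real-time event here ...
        }
        finally
        {
            GCSettings.LatencyMode = oldMode;
        }
    }
}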
Rick Brewster Send private email
Friday, March 23, 2007
 
 
Rick, thank you so much for your detailed explanation of the implementation of Paint.NET.

Hopefully this has cleared up some things.

Rick, you mentioned using P/Invoke to call some Win32 services that were not available. Are these services now available in .NET 2.0, or maybe planned for the next Orcas version?

From what I have seen in other projects, Win32 API calls are still being used for things not yet supported in .NET, or to get better platform support or speed. So it seems Win32 will probably be around for some time.
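
For readers unfamiliar with the mechanism being discussed, here is a minimal P/Invoke sketch. The working-set trim is just an illustrative Win32 call; it is not necessarily one of the calls Paint.NET actually makes:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Win32 declaration; SIZE_T maps to IntPtr so this works on 32- and 64-bit.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSize(IntPtr hProcess, IntPtr minSize, IntPtr maxSize);

    // Passing -1 for both sizes asks Windows to trim the working set as much as possible.
    public static void TrimWorkingSet()
    {
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle,
                                 (IntPtr)(-1), (IntPtr)(-1));
    }
}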
Jeff C. Send private email
Saturday, March 24, 2007
 
 
Some of the things Paint.NET uses p/invoke for have since been introduced into the .NET Framework. I've noticed some and changed Paint.NET to use the .NET equivalents instead, and others I probably haven't noticed. Some I just haven't changed because it was a very low priority compared to other work items.

For example, in .NET 1.1 there was a major memory leak in System.Drawing.Region::GetRegionScans(), but this was fixed in .NET 2.0. I'm still using my custom code for getting the region scans via GDI, mostly because it enables several caching scenarios. System.UI.DrawThemedButton() could also probably be rewritten to thunk to the System.Windows.Forms.VisualStyles stuff, and then that method could be removed altogether, but I just haven't done it.
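
As a hedged sketch of what "thunking to the System.Windows.Forms.VisualStyles stuff" could look like for a themed button, using standard .NET 2.0 API (this is not Paint.NET's actual implementation):

using System.Drawing;
using System.Windows.Forms;
using System.Windows.Forms.VisualStyles;

static class ThemedDrawing
{
    // Draw a themed push button when visual styles are active,
    // falling back to the classic non-themed rendering otherwise.
    public static void DrawThemedButton(Graphics g, Rectangle bounds)
    {
        if (Application.RenderWithVisualStyles)
        {
            VisualStyleRenderer renderer =
                new VisualStyleRenderer(VisualStyleElement.Button.PushButton.Normal);
            renderer.DrawBackground(g, bounds);
        }
        else
        {
            ControlPaint.DrawButton(g, bounds, ButtonState.Normal);
        }
    }
}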
Rick Brewster Send private email
Saturday, March 24, 2007
 
 
Woops, System.UI.DrawThemedButton() should be PaintDotNet.SystemLayer.UI.DrawThemedButton().
Rick Brewster Send private email
Saturday, March 24, 2007
 
 
Rick Brewster wrote: "Everything in Paint.NET is either lazy or defer loaded. Even the strings for menu items aren't loaded until you actually click on a menu. "

and later...

"This is the type of performance that is definitely NOT achieved "out of the box," and is something you would probably need to spend a lot of time on for any non-trivial application regardless of what language or framework you are using."

I found myself smiling when I read this, mainly because of how you can say something that's technically true but still convey a false thesis.

The point here is that he's admitting he had to put a fair amount of effort into making Paint.NET's UI snappy and responsive. He also claims "we all do it no matter what, so no big deal". The problem is that with .NET you have to start optimizing a lot earlier in the process than you do with native code, because .NET shows its slowness so much sooner. You don't have to build much of an app at all before you are cringing at the slow rendering speed and general lack of responsiveness. With an app written in, say, Delphi, you may also get to a point where you are lazy loading modules and whatnot, but not loading menu strings until you click on them?!! In 12 years of Delphi development I've never had to go to that extreme to squeeze out performance. A Delphi app can get big and slow just like any other app. The difference is that the app can get a lot bigger than a comparable .NET app before you say "Okay, time to start optimizing".

I was looking for ways to optimize my WinForms app within a week of beginning work on it, just doing plain vanilla WinForms stuff.  I wouldn't get to that state in a typical Delphi app until months or years after development started, because it's snappy OUT OF THE BOX, whereas .NET is SLOW out of the box, and sends you scurrying for help sooner.

Even if you can eventually, somehow, get comparable performance, the questions are: how soon in your development did you have to start thinking about performance, how much TIME did you spend tweaking performance to get it acceptable, and how big did your project get before performance became an issue?

"As for a comparison to Delphi, I don’t know. I’ve never worked with it."

That was very obvious to this Delphi programmer long before you wrote this -- you should have put in this disclaimer when you claimed that "you would probably need to spend a lot of time on for any non-trivial application REGARDLESS [ed] of what language or framework you are using."  I knew from the moment you wrote this you had no Delphi experience.

Randy
Randy Magruder Send private email
Thursday, March 29, 2007
 
 
"Plus, the Photoshop guys have spent a LOT of time optimizing this scenario and they are paid specifically to make that stuff awesome."

You know what, I simply don't believe you. If the Gaussian blur in Paint.NET in fact CAN be made as fast as the Gaussian blur you see in Photoshop and other apps, I challenge you to implement it in a way that shows us it can.

I just fired up the Free Pascal application Pixel, which was mentioned earlier in this thread, and did a 100-pixel Gaussian blur on a 1600 by 1200 image. In Pixel the operation took roughly 5 seconds.

Then I fired up Paint.NET and performed the same operation. Guess what? It took 60 seconds.

I think... if Microsoft has the least bit of hope for .NET to be adopted as a serious desktop application development platform, they ought to at least put out one or two applications which show that it is indeed possible to achieve general performance comparable to that of ordinary non-managed applications. If that's never done... and if all we see are toy applications like Paint.NET, then I for one will certainly not consider .NET a serious platform for building processing-intensive desktop applications.
O Garfunkel
Sunday, April 01, 2007
 
 
I was invited on to this thread to discuss how Paint.NET was implemented, not to have a lightsaber fight about which development environment is the bestest or the fastest. But now you've warped what I've said and I'm just being met with flames and attacks, so I guess I learned my lesson. Goodbye.
Rick Brewster Send private email
Tuesday, April 03, 2007
 
 
I have no idea whether the 'warping' comment was directed at me or others.  If it's me, I would hope that Rick would explain to me how I've warped what he said, as I was careful to look at what he said in context and respond to it.

Really, the thread is NOT about Paint.NET. It's about ".NET desktop application performance vs. native apps". Paint.NET was simply put forward as a best-of-class app for .NET to show its stuff. It is NO insult to the Paint.NET developers if .NET has failed the test here. I am sure they are talented and skilled developers who did a great job. The question here is whether or not .NET is a good choice for a responsive desktop application, and I have yet to see anything in this thread that would contradict my experience with .NET desktop apps: that they are sluggish and unresponsive compared to native apps, that they generally take longer to load, and that they are easy to pick out of any lineup of apps.

.NET has really turned into Java repackaged by Microsoft. There are valid reasons to use .NET. ASP.NET is quite popular as a server platform, just as Java has thrived as a server-based, platform-agnostic language. But for writing snappy Windows apps that don't need to be fine-tuned just to convince people they aren't MS-labeled Java apps, .NET has thus far failed to impress.

I think the point has been made here that while Delphi is, and has always been, written in Delphi, Microsoft doesn't eat its own dog food when it comes to .NET. .NET development is something they push on us, but you won't find their flagship products, from operating systems to developer tools to their office line, using it as anything more than an accessory.

I also have yet to see any real evidence of the kinds of massive productivity improvements that Microsoft promised with the .NET platform. Maybe such improvements exist compared to MFC and ATL development in C++, but for your average Delphi developer the improvements are not so dramatic, and the performance hit and the productivity lost to performance-tuning the app more than make up for any differences.

My opinion on this is always evolving, and I am ever watchful for the time when it becomes irresistible for me to dive back into .NET development, but right now, from where I sit, it sure is nice to develop apps that perform and act like Microsoft's NATIVE code apps without all the tweaking, tuning, and apologetics needed to justify a .NET approach.

Just my opinion.  I'm sorry if you think I twisted your words, Rick.  It certainly was not my intention, and if you can illustrate how you were somehow misinterpreted, I'll gladly apologize.

Randy
Randy Magruder Send private email
Thursday, April 05, 2007
 
 

This topic is archived. No further replies will be accepted.
