The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Preventing total failure - Structured Exceptions in Linux?

Is there a way I can emulate Windows-like C++ Structured Exceptions in Linux?

It seems that Wine does some magic to cause signals like SIGSEGV, SIGABRT etc., to be treated as C++ exceptions rather than killing the process and dumping core. I could delve into the code and use it in my Linux programs, but I was wondering if a more graceful and well-tested solution is already out there!

Thanks in advance!
Leon
Sunday, February 05, 2006
 
 
Probably the easiest way would be to catch the signals and queue them up for feeding into your error system.

I've got some methods I use for wrapping up system calls: they go off, call the system call, check the result and, if it's duff, throw a suitable exception based on errno. (Plus some descendant classes for calls into other systems, like OpenSSL, which don't use errno.)

I'd have those check the signal queue first -- I guess there are similar places you could integrate pulling results off that signal queue.
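A minimal sketch of that wrapper pattern, assuming a helper called check_syscall (the name is hypothetical, not from the post):

```cpp
#include <cerrno>
#include <cstring>
#include <fcntl.h>
#include <stdexcept>
#include <string>
#include <unistd.h>

// Hypothetical wrapper: pass in the result of a system call; if it
// reports failure (-1), turn errno into a C++ exception instead of
// scattering error checks through the calling code.
int check_syscall(const char* what, int result) {
    if (result == -1) {
        throw std::runtime_error(std::string(what) + " failed: "
                                 + std::strerror(errno));
    }
    return result;
}
```

Used at a call site as, e.g., `int fd = check_syscall("open", open(path, O_RDONLY));` -- the caller either gets a valid result or an exception carrying the errno text.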

Having said that, you really want to have a good look round at why you're getting SEGVs. Programs in normal operation shouldn't be generating them (unless you're in ObjectStore world, in which case -- have fun!)
Katie Lucas
Monday, February 06, 2006
 
 
I gotta see how the thing works!

As a developer, of course I want the software to crash hard in case of an error, but the powers that be don't want the system to ever die.
Leon
Monday, February 06, 2006
 
 
"As a developer, of course I want the software to crash hard in case of an error, but the powers that be don't want the system to ever die."

The powers that be would rather have the program continue with a corrupted internal state?
clcr
Monday, February 06, 2006
 
 
I hate to agree with the powers that be but they have a point.

Of course, it all depends on the type of application. A business-critical app should simply die with a huge error message rather than continue to corrupt data under the covers. But a consumer MP3-cataloging app should probably never just die. It would be better for it to occasionally, quietly lose a little data. Users will consider the data loss a small bug and will accept it. But if the program dies with a massive error message, they will consider it unstable and dangerous.
Turtle Rustler
Monday, February 06, 2006
 
 
As you already know, they're called signals. If you have PowerPoint, I think http://debryro.uvsc.edu/3060/slides/Unix_Signals.ppt is a pretty good introduction to them (scroll past the sections on synchronization and deadlocks).

Advanced Programming in the UNIX Environment (Stevens and Rago, 2005) talks about signals in much greater detail and is considered to be one of the reference works on the subject.

For your particular flavor of Linux, you will need to read the man page on the specific functions such as sigaction().
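Concretely, the usual trick for surviving a fault and surfacing it to C++ code is sigaction() plus sigsetjmp()/siglongjmp(). A minimal sketch (guarded_write is a hypothetical name; note that recovering from a genuine SIGSEGV this way is formally undefined behavior, though it is the established technique on Linux and what Wine-style machinery builds on):

```cpp
#include <csetjmp>
#include <csignal>

// Recovery point for the fault handler (a single global for
// simplicity; real code would need per-thread state).
static sigjmp_buf g_recover;

extern "C" void on_segv(int) {
    // Jump back past the faulting instruction. Because sigsetjmp was
    // called with savemask = 1, the signal mask is restored too, so
    // SIGSEGV remains catchable afterwards.
    siglongjmp(g_recover, 1);
}

// Attempt *p = value; return false instead of dying if it faults.
bool guarded_write(int* p, int value) {
    struct sigaction sa {}, old {};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &old);

    bool ok = false;
    if (sigsetjmp(g_recover, 1) == 0) {
        *static_cast<volatile int*>(p) = value;  // may fault
        ok = true;
    }
    sigaction(SIGSEGV, &old, nullptr);  // restore the previous handler
    return ok;
}
```

From the `ok == false` branch you can then throw an ordinary C++ exception, which is roughly what the original question was after: the signal never reaches the default disposition, so there is no core dump.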
TheDavid
Monday, February 06, 2006
 
 

This topic is archived. No further replies will be accepted.

Other recent topics
 
Powered by FogBugz