The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

TCP Binary Handshakes

I'm using a winsock-based system to transfer binary information across a network for a data-acquisition program.

I'm running into a problem: if a foreign client attempts to connect to the server on one of its ports, it can crash the server.

What's the best way to authenticate a client and verify that it is a valid program? Send a specified string across on connection?
Newbie Network Programmer
Thursday, February 09, 2006
 
 
The best solution is to fix the server so that it doesn't crash.  Anything else is an over-engineered band-aid that only attempts to hide the problem.
bmm6o Send private email
Thursday, February 09, 2006
 
 
I agree. Why does the server crash when someone connects to it?
anon
Thursday, February 09, 2006
 
 
Well, you end up with the following problem:

When you want to send a package, you need to know how long it is, and you need a data structure for the package.

Currently I send a DWORD with the size of the package, and then receive until I get the entire package.

Now, when I go to decode the package: if it's from a valid client, I can deserialize the data into a workable form.

If it's junk data, I can TRY to deserialize it, but it may or may not fail at this deserialization, due to the assumptions about how the data should be formatted.

What I'm trying to find is a way to say "this binary data is valid" or "I don't know whether this binary data is valid or not".

With strings, parsing is easy. With binary data, it's a little bit harder to figure out whether what you've got is valid.
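A simplified sketch of the receive side of that scheme, assuming a connected winsock SOCKET, a length sent in network byte order, and a made-up MAX_PACKET_SIZE cap (WSAStartup and error reporting omitted):

#include <winsock2.h>   // link with ws2_32.lib
#include <vector>

// Hypothetical upper bound on any legitimate packet; part of the protocol spec.
static const DWORD MAX_PACKET_SIZE = 64 * 1024;

// Receive exactly 'len' bytes, looping over partial reads.
// Returns false if the peer disconnects or an error occurs.
static bool RecvAll(SOCKET s, char* buf, int len)
{
    int total = 0;
    while (total < len) {
        int n = recv(s, buf + total, len - total, 0);
        if (n <= 0)            // 0 = orderly close, SOCKET_ERROR = failure
            return false;
        total += n;
    }
    return true;
}

// Read one length-prefixed packet into 'payload'.
// Rejects lengths outside the bounds allowed by the protocol.
static bool RecvPacket(SOCKET s, std::vector<char>& payload)
{
    DWORD netLen = 0;
    if (!RecvAll(s, reinterpret_cast<char*>(&netLen), sizeof(netLen)))
        return false;
    DWORD len = ntohl(netLen);           // assumes network byte order on the wire
    if (len == 0 || len > MAX_PACKET_SIZE)
        return false;                    // don't trust the length field blindly
    payload.resize(len);
    return RecvAll(s, &payload[0], static_cast<int>(len));
}

The key point is that neither the length field nor the peer is trusted: a bogus length gets the connection dropped instead of a huge allocation.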
Newbie Network Programmer
Thursday, February 09, 2006
 
 
I see three approaches. You should probably do all three.

First, rewrite your deserialization code so that it fails gracefully instead of crashing. By "fails gracefully", I mean that your program's state should not change as a result of receiving a bad packet. This is an absolute necessity for any network server.

Second, consider changing your protocol to include a checksum or some other quick test that invalid packets will fail.

Third, stop arbitrary clients from connecting, either by firewalling or by some sort of authentication scheme.
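To illustrate the second suggestion, a minimal sketch of a payload checksum (the FNV-1a hash and the checksum field are just examples, not an existing protocol):

#include <cstddef>
#include <cstdint>

// Simple 32-bit FNV-1a hash used as a quick integrity check.
static uint32_t Fnv1a(const unsigned char* data, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

// The sender computes the checksum over the payload and puts it in the
// header; the receiver recomputes it and drops the packet on mismatch.
static bool ChecksumOk(const unsigned char* payload, size_t len, uint32_t expected)
{
    return Fnv1a(payload, len) == expected;
}

It won't stop a determined attacker, but it reliably catches random garbage and truncated packets before they reach the deserializer.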
clcr
Thursday, February 09, 2006
 
 
Also, start your packet with a magic number or code.  For example, any executable program on Windows starts with 'MZ' (I think), so even if you rename a text document to text.exe, Windows knows not to run it.
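A sketch of that idea, with made-up magic bytes:

#include <cstddef>

// Hypothetical magic bytes at the start of every valid packet.
static const unsigned char MAGIC[2] = { 0xD0, 0xA7 };

// Reject anything that doesn't start with the magic before doing any other work.
static bool HasMagic(const unsigned char* packet, size_t len)
{
    return len >= sizeof(MAGIC)
        && packet[0] == MAGIC[0]
        && packet[1] == MAGIC[1];
}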
Grant
Thursday, February 09, 2006
 
 
"What's the best way to authenticate a client and verify that it is a valid program? Send a specified string across on connection?"

Bear in mind that someone, someday, may want to crash your server deliberately. What is to stop them from reverse-engineering a valid client, learning how to fake being one, and then sending invalid data?

As the others said, first fix the "server crashes" part of the problem. An ideal network server doesn't crash even after a visitation from a deity ;-)
Roman Werpachowski Send private email
Thursday, February 09, 2006
 
 
If the server is actually interpreting the data, you have no choice but to make it robust and capable of dealing with any kind of input. You need some indication of the message type in each message, and you need to validate every piece of data in the message that the server uses to make decisions.

Also, the concern about possibly unauthorized clients trying to send data to the server is more of a network security issue, which you must also address by somehow defining and limiting access to legitimate clients only. If it's a connection-oriented setup, you could do some form of authentication before allowing access.
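As a sketch, validation of the type indication might look like this (the message types and size rules here are invented for illustration):

#include <cstddef>

// Hypothetical message type codes defined by the protocol spec.
enum MessageType {
    MSG_HELLO   = 1,
    MSG_SAMPLE  = 2,
    MSG_GOODBYE = 3
};

// Check the declared type and its size rules before handing the body to the
// real deserializer; anything unknown or out of spec is rejected outright.
static bool ValidateMessage(unsigned char type, size_t bodyLen)
{
    switch (type) {
    case MSG_HELLO:   return bodyLen == 0;                      // spec: no body
    case MSG_SAMPLE:  return bodyLen >= 8 && bodyLen <= 512;    // spec: bounded body
    case MSG_GOODBYE: return bodyLen == 0;
    default:          return false;                             // unknown type code: drop it
    }
}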
Grok2
Friday, February 10, 2006
 
 
Make "fixing the server crashes" your first and logical response to such a problem. Learn to deal with invalid data. Otherwise, resign.
Dave
Friday, February 10, 2006
 
 
"fail at this serialization due to the assumptions"

Bad data doesn't crash servers; bad assumptions do.  Never ever ever ever assume anything that you have the ability to verify yourself.  Valid data can have legitimate constraints; violations of those constraints should cause error conditions, not access violations.

Some of the other posts have been a little harsh, but seriously, this is job #1 for a server.  If it can't detect bad data (either malicious or accidental), it doesn't matter what else it can do.
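As a concrete sketch, this is the kind of check that turns a violated assumption into an error return instead of an access violation (the length-prefixed string field is hypothetical):

#include <cstddef>
#include <string>

// A packet claims to contain a length-prefixed string at 'offset'.
// Verify the claim against the actual buffer size before touching the bytes.
static bool ReadString(const unsigned char* buf, size_t bufLen,
                       size_t offset, std::string& out)
{
    if (bufLen < 2 || offset > bufLen - 2)   // not even room for the length field
        return false;
    size_t strLen = (size_t(buf[offset]) << 8) | buf[offset + 1];
    if (strLen > bufLen - offset - 2)        // claimed length runs off the buffer
        return false;
    out.assign(reinterpret_cast<const char*>(buf) + offset + 2, strLen);
    return true;
}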
bmm6o Send private email
Friday, February 10, 2006
 
 
As others have said, make the server robust.

You need to carefully define the binary protocol.  You mention that text parsing is easy whereas this is not.  In fact, the two are almost equivalent.  The set of validity rules for your binary protocol is very much like a grammar, except that some of the rules may express things that an ordinary grammar cannot (e.g., a length field followed by that much data).

You also need to account for reasonable bounds on the input.  For example, if your packet calls for a DWORD length followed by the data, will your server crash if I send a length of 2^30-1?  The solution here is to specify bounds on length fields and any other required constraints.

Once you have a complete specification of the protocol, you simply write a parser for it, and have that parser reject any non-conforming input.
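As a sketch, such a parser can be a function that checks each rule of the spec in turn and rejects the packet at the first violation (the header layout and limits below are invented for illustration):

#include <cstddef>
#include <cstdint>

// Hypothetical limits and constants taken from the protocol specification.
static const size_t   HEADER_SIZE     = 8;          // magic(2) + version(1) + type(1) + length(4)
static const uint32_t MAX_BODY_LENGTH = 64 * 1024;  // bound on the DWORD length field

enum ParseResult { PACKET_OK, BAD_HEADER, BAD_VERSION, BAD_LENGTH, TRUNCATED };

static ParseResult ValidatePacket(const unsigned char* p, size_t len)
{
    if (len < HEADER_SIZE)                return TRUNCATED;
    if (p[0] != 0xD0 || p[1] != 0xA7)     return BAD_HEADER;   // magic bytes
    if (p[2] != 1)                        return BAD_VERSION;  // only version 1 is defined
    uint32_t bodyLen = (uint32_t(p[4]) << 24) | (uint32_t(p[5]) << 16)
                     | (uint32_t(p[6]) << 8)  |  uint32_t(p[7]);
    if (bodyLen > MAX_BODY_LENGTH)        return BAD_LENGTH;   // e.g. rejects 2^30-1
    if (len != HEADER_SIZE + bodyLen)     return TRUNCATED;    // declared vs. actual size
    return PACKET_OK;
}

Anything other than PACKET_OK gets the packet dropped (and, if you like, the connection closed) before any deserialization happens.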

This won't stop denial-of-service attacks, but I don't believe there is any way of doing that perfectly (i.e. rejecting DoS packets at any level, while retaining all legitimate traffic).
David Jones Send private email
Sunday, February 12, 2006
 
 
To be fair to the OP, it might not be possible to change the server to be more robust. It might be completely out of their control: a third-party product, a customer's existing system they have to interface with, or any number of other reasons.

However, if that's the case, the best you can end up with is a leaky abstraction. Yes, you should always ensure you never crash and that you exit gracefully, but if the server is just throwing garbage, there's only so far you can go.
Ritchie Send private email
Monday, February 13, 2006
 
 
I agree with the other posters - if it's your server software, you need to fix it to accommodate less-than-perfect input.  If it's not, can you determine the trigger(s)? 

For example, back in the bad old days I wrote an app that needed to talk to FedEx's first-generation routing information server (we were doing multicarrier shipping with a single workfloor computer/printer).  Their server crashed unconditionally if we did not explicitly disconnect, which the TCP/IP RFCs say is not required, and which the similar UPS box didn't care about.  We changed our program to explicitly disconnect, and the FedEx box was happy, without hurting anybody else.
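The explicit disconnect itself is only a few lines of winsock (a sketch; WSAStartup and error checks omitted):

#include <winsock2.h>   // link with ws2_32.lib

// Tell the peer we are done sending, drain anything it still has for us,
// then close the socket. Some servers, like that FedEx box, insist on seeing this.
static void Disconnect(SOCKET s)
{
    shutdown(s, SD_SEND);              // send FIN: "no more data from me"
    char buf[256];
    while (recv(s, buf, sizeof(buf), 0) > 0)
        ;                              // discard whatever the peer still sends
    closesocket(s);
}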

FedEx fixed their software later, IIRC.
a former big-fiver Send private email
Monday, February 13, 2006
 
 

This topic is archived. No further replies will be accepted.
