The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

HTTP Application security

The product I've been working on for the better part of 2 years (I know the first revision should only take 6 months) is getting closer to seeing the light of day.  I've been slowly but surely moving my blog in that direction, and since I've posted here a bit over the past few years, I'd kind of like to run it by you all.

I am working on a web application security product.  It turns out this is a lot more complicated than I originally thought.  I could have easily used another 3 people to help out, but I'm pretty excited about what I've accomplished so far.  Half the battle was getting started; the other half is finding some interested parties to help drive the requirements.

I am curious if anybody here is currently using any web application security products -- products that inspect HTTP traffic before it is handled by the web server (Apache, Tomcat, IIS, etc.).  The market is pretty new, so I suspect most are not.  Also, how are you currently handling HTTP validation?  What I mean is: how do you decide which parameters you are going to accept or reject in your application?  And is security really a concern for your applications?  For almost every developer I've talked to, security seems to be something that is thrown on at the end.  Most don't really consider security until they've been burned by it.  I was burned pretty badly (I had to rebuild a server farm of ~20 servers) 5 years ago, so I learned my lessons early.

Here are my thoughts.  Web sites should not allow origin web servers to handle invalid HTTP requests if at all possible.  Validation should be dealt with in the DMZ.  All requests to public web sites should traverse a proxy which inspects the HTTP traffic before it is handled on a third, trusted network.  If the request doesn't meet a specific set of criteria (specifying that criteria is the hard part), pass it to the server for further processing; otherwise drop it.

This has multiple advantages. 

* It makes it clear what the security rules are for the web site, as they are separate from the application.  The rules can start very restrictive, and be relaxed as requirements are added. 

* The rules themselves provide external documentation for the application.  They specify the "interface" for the application at the network protocol level.

* It is less likely that the origin server will be compromised if requests are rejected before being processed by the server.

* It can help alleviate DoS attacks, as the proxy is optimized to handle invalid requests, whereas the server is optimized to handle valid ones.

* The origin server performance can be improved because cycles aren't being wasted on bad requests. 

The biggest drawback is the overhead of specifying the rules.  Every developer I talked to thought this would be a showstopper, even after I explained that they have to do it anyway in their application.  It seems many developers are just too busy to consider dealing with another application layer.
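
To make that concrete, here's a rough sketch of what a deny-by-default rule set and the proxy's decision step could look like.  The rule format, paths, and patterns below are all made up for illustration (and it's Python just because it's quick to sketch in); my actual syntax isn't settled:

# Hypothetical deny-by-default rule set: anything not listed is dropped
# at the proxy, before the origin server ever sees it.
import re
from urllib.parse import urlsplit, parse_qs

RULES = {
    "/":       {"methods": {"GET"}, "params": {}},
    "/search": {"methods": {"GET"},
                "params": {"q": re.compile(r"^[\w \-]{1,64}$")}},
    "/login":  {"methods": {"POST"},
                "params": {"user": re.compile(r"^\w{1,16}$"),
                           "pass": re.compile(r"^.{1,64}$")}},
}

def allow(method, url, body=""):
    """Return True only if the request matches an explicit rule."""
    parts = urlsplit(url)
    rule = RULES.get(parts.path)
    if rule is None or method not in rule["methods"]:
        return False
    params = parse_qs(parts.query, keep_blank_values=True)
    if method == "POST":
        params.update(parse_qs(body, keep_blank_values=True))
    for name, values in params.items():
        pattern = rule["params"].get(name)
        if pattern is None:            # parameter was never exposed: reject
            return False
        if not all(pattern.match(v) for v in values):
            return False
    return True

print(allow("GET", "/search?q=widgets"))            # True
print(allow("GET", "/?foo=bar"))                    # False: unexpected parameter
print(allow("POST", "/login", "user=bob&debug=1"))  # False: "debug" was never exposed

The point is that everything starts rejected, and rules get added (or relaxed) only as the application actually needs them.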

My feeling is that the industry is moving in this direction at the high end now, although the concept seems foreign to most application developers.  I'm hoping to hook my boat to a security vendor that needs to get into HTTP.  And if that doesn't work out, there is always the one-off route, and then Open Source.

Thoughts?
Christopher Baus Send private email
Wednesday, December 01, 2004
 
 
Filtering firewalls are not a totally new thing, although I've not seen any that particularly focus on web services and that kind of rule definition, so it looks interesting. I'd be curious as to how well it would stand up to heavy DoSing, as the vendors I've seen marketing DoS protection as a filtering firewall appliance all sell hardware filtering solutions, claiming that a software solution wouldn't be up to it. This could of course be self-interest on their part! I would think that the other major concern, besides the complexity of the rules configuration (if a company really does need it, and it's not too arcane, they'll do it anyway), is how well it scales.
Andrew Cherry Send private email
Wednesday, December 01, 2004
 
 
I'm getting a sense of deja vu. Did you post about this a year, or maybe two, ago?

Wednesday, December 01, 2004
 
 
There is a possibility I did post about this, but I've finally got the core of it running pretty well, and I'm looking for some feedback.  It is pretty difficult to build something like this in your free time, although it might be more difficult to explain it.

Many people complain that engineers always underestimate the problem at hand.  A friend of mine has a theory about this.  He figures that if we didn't, nothing would ever get done, because we'd be overwhelmed and would talk ourselves out of it.  Sometimes you have to take on one small problem and just do it.

I used to think that not being able to develop the software quickly would put me out of the game.  What I'm finding is that the market is developing at a snail's pace, which is amazing considering the threats to applications are increasing, not decreasing.  It can be surprising how long these things take.
Christopher Baus Send private email
Wednesday, December 01, 2004
 
 
It sounds interesting, but I'm not sure I understand what sorts of attacks this is supposed to prevent.  When you say "invalid HTTP requests", do you mean that the requests are semantically valid (as HTTP) but violate some business constraint?  Like the UI could never generate that request?  I can see why the rules would be painful to generate.
Is this an alternative to validating input in the application?  Why is it better to do it this way?
Brian Send private email
Wednesday, December 01, 2004
 
 
Brian,

Thanks for your question.  Every one helps me improve my story.  I've had this question before, so it's best that I enumerate the cases going forward; I will clean this up and put it on my web site.

Types of invalid requests an application proxy can reject.

* Invalid HTTP requests

This one is simple.  If the request isn't valid HTTP, reject it.  This prevents the core web server from having to handle the malformed request.

* Valid requests that break constraints of the HTTP implementation

There are issues with the HTTP specification.  Primarily there are no constraints on any parameter lengths.  For instance, according to the spec, a message chunk can be infinitely long.  It is up to the implementer to specify these constraints, but many do it incorrectly.  I’m convinced this is the reason there have been so many buffer overflows in HTTP implementations.  For instance Cisco had a buffer overflow when the chunk size exceeded 2^32.  My proxy allows users to specify what constraints they want, for instance maximum number of headers to accept per request, and rejects the requests before they reach the web server if they break these constraints.

* Valid requests that are known hacks

For instance, are you sick of seeing ..\..\..\winnt\cmd.exe in your log files?  The proxy can filter this out before it reaches the web server.

* Application specific validation

This is the biggie.  What I am working on is inspecting HTTP application parameters -- basically everything in the POST data or after the "?".  For instance:

http://www.joelonsoftware.com/?foo=bar

This request should be rejected, as it isn't expected by the application.  It can only be a rogue request.  Yet it makes it all the way to the web server, and the server even generates a response -- a response to a request that the application should recognize as invalid.

The goal here is two fold. 

1) stop SQL injections and XSS attacks before they reach the application server. 

2) Reduce load on application server by rejecting rogue requests at the network edge

The side benefit is that the rules are self-documenting, meaning the HTTP "interface" for your application is well understood.  Half the battle of security is understanding what the hacker can and cannot do.  Currently the validation rules are often mingled with and hidden in the application code, which makes it extremely difficult to expose those rules.  The primary goal is to only allow users to send parameters that have been explicitly exposed.  That is where the major shift in thinking comes from.  My job is to convince the developer to think about what parameters his or her application must handle, and to document them as a rule set.
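
To make the first three categories a little more concrete, here's a rough sketch of the kinds of checks I mean.  The limits, the signature list, and the code itself are invented for the example (it isn't my implementation, which isn't Python anyway):

import re

MAX_HEADERS      = 64          # hypothetical constraint: header lines per request
MAX_HEADER_BYTES = 8 * 1024    # hypothetical constraint: bytes per header line
KNOWN_BAD = [                  # signatures of well-known attacks
    re.compile(r"\.\./"),                    # path traversal
    re.compile(r"cmd\.exe", re.IGNORECASE),
]
REQUEST_LINE = re.compile(r"^(GET|POST|HEAD) (\S+) HTTP/1\.[01]$")

def inspect(raw_head):
    """Return None if the request head looks acceptable, otherwise a reason."""
    try:
        head = raw_head.decode("ascii")
    except UnicodeDecodeError:
        return "invalid HTTP (non-ASCII in request head)"
    lines = head.split("\r\n")
    match = REQUEST_LINE.match(lines[0])
    if match is None:
        return "invalid HTTP (bad request line)"
    headers = [line for line in lines[1:] if line]
    if len(headers) > MAX_HEADERS:
        return "breaks implementation constraint (too many headers)"
    if any(len(line) > MAX_HEADER_BYTES for line in lines):
        return "breaks implementation constraint (header line too long)"
    if any(sig.search(match.group(2)) for sig in KNOWN_BAD):
        return "known hack signature in request target"
    return None                 # pass it on to the origin server

print(inspect(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
# None
print(inspect(b"GET /scripts/../../winnt/system32/cmd.exe HTTP/1.0\r\n\r\n"))
# 'known hack signature in request target'

The fourth category, application-specific validation, is the same mechanism driven by a per-application rule set of allowed parameters and values, along the lines of the sketch earlier in the thread.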
Christopher Baus Send private email
Wednesday, December 01, 2004
 
 
I am finding it difficult to understand the scope of your product...

With regard to DoS, other than the "slashdot effect", I am not aware of any other DoS strategy that uses HTTP. Does this really happen?

Wouldn't HTTP be a terribly inefficient and ineffective DoS strategy?
Robert Charon
Wednesday, December 01, 2004
 
 
I don't know...  I'm not sold (but I'm probably not the target customer anyway).  It would be one thing if it were easy to get a description of "valid input" from a script.  But if I have to comb through my scripts and figure out what constitutes valid input...  ick.  Do you have a way to automate this?  How robust is it?  Is it better than just grepping for (e.g.) 'Request("[^"]")'
Brian Send private email
Thursday, December 02, 2004
 
 
> There is a possibility I did post about this, but I've finally got the core of it running pretty well

I wasn't complaining, just wondering whether my brain was farting on me (again).

What made me think this was this paragraph:
If the request doesn't meet a specific set of criteria (specifying that criteria is the hard part), pass it to the server for further processing; otherwise drop it.

I think I mentioned at the time that I thought this was backwards, and that you should check whether the request matches validity rules rather than t'other way about (it makes it more future-proof against currently unknown attacks).

Thursday, December 02, 2004
 
 
> Wouldn't HTTP be a terribly inefficient and ineffective DoS strategy?

Actually, it is quite the opposite.  An attacker just has to prevent legitimate users from accessing the system to successfully mount a DoS attack.  It depends on the application, but the HTTP layer is often the best place to launch a DoS attack (if I were to take off my white hat and think like a cracker for a minute).  If the application can be tied up handling invalid requests, a DoS attack can easily be mounted over HTTP.

For instance, if you were going to mount a DoS attack against eBay, would you try to go after their routers or their web servers?  The routers (and TCP/IP firewalls) are highly optimized to handle a very specific type of traffic.  The protocols are well understood and in production everywhere on the net.
In contrast, the HTTP layer contains all kinds of custom application software that reads and writes to databases, handles complex state, etc.

Application proxies can fend off these types of attacks by not involving the application in the HTTP validation process.  Application proxies are built to do one thing and do it well: inspect HTTP traffic and accept or reject requests by applying a rule set.  For instance, I use a single-threaded, event-driven architecture, which is very scalable, while almost all applications use a multi-threaded, blocking architecture.  Obviously this can't be 100% effective.  There will always be some traffic that gets through to the application server, but the goal is to reduce this as much as possible at the network edge.
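
If it helps, here is a toy sketch of that single-threaded, event-driven shape.  It uses Python's asyncio purely as illustration -- my implementation isn't Python -- and the backend address and the accept() check are placeholders for a real rule set:

import asyncio

BACKEND = ("127.0.0.1", 8080)            # hypothetical origin server

def accept(head):
    """Placeholder for the real rule-set check."""
    return head.startswith(b"GET /") and b".." not in head

async def handle(client_reader, client_writer):
    try:
        # Read the request line and headers without blocking other connections.
        head = await client_reader.readuntil(b"\r\n\r\n")
    except (asyncio.IncompleteReadError, asyncio.LimitOverrunError):
        client_writer.close()
        return
    if not accept(head):
        # Rejected at the edge: the origin server never sees this request.
        client_writer.write(b"HTTP/1.1 400 Bad Request\r\nContent-Length: 0\r\n\r\n")
        await client_writer.drain()
        client_writer.close()
        return
    # Passed inspection: relay it to the origin server (naively, one read).
    backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
    backend_writer.write(head)
    await backend_writer.drain()
    client_writer.write(await backend_reader.read(65536))
    await client_writer.drain()
    backend_writer.close()
    client_writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8000)
    async with server:
        await server.serve_forever()     # one thread, one event loop, many connections

if __name__ == "__main__":
    asyncio.run(main())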
Christopher Baus Send private email
Thursday, December 02, 2004
 
 
>  But if I have to comb through my scripts and figure out what constitutes valid input...  ick.

This is a good observation.  I've been thinking long and hard about this.  I had a few long discussions with some developers after I presented this concept to them last month.
Christopher Baus Send private email
Thursday, December 02, 2004
 
 
I realize that your app is agnostic with regard to the particular dynamic content it is filtering for (ASP, PHP, JSP, CGI).  But for creating these rules, it might make more sense to target particular technologies.  E.g., have a module that can load up a JSP and try to determine the keys and values it is expecting in the input.  Then maybe present the user with a form where they can flesh out what you have determined from the type system.
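
Something crude along these lines, maybe -- entirely hypothetical, and it would obviously miss anything built up dynamically:

import re
import sys

# Parameter-read idioms for a few technologies (rough patterns, not exhaustive).
PATTERNS = [
    re.compile(r'request\.getParameter\(\s*"([^"]+)"\s*\)'),             # JSP / servlets
    re.compile(r'Request(?:\.Form|\.QueryString)?\(\s*"([^"]+)"\s*\)'),  # classic ASP
    re.compile(r'\$_(?:GET|POST|REQUEST)\[\s*[\'"]([^\'"]+)[\'"]\s*\]'), # PHP
]

def guess_parameters(source):
    """Return the parameter names a page appears to read."""
    found = set()
    for pattern in PATTERNS:
        found.update(pattern.findall(source))
    return found

if __name__ == "__main__":
    source = open(sys.argv[1]).read()
    for name in sorted(guess_parameters(source)):
        # Emit a deliberately strict default rule the developer can then relax.
        print(f"{name}: maxlen=32 charset=alphanumeric")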
Brian Send private email
Thursday, December 02, 2004
 
 
> have a module that can load up a JSP and try to determine keys and values that it is expecting in the input.

Yes exactly.  That is an option.  I think you are seeing my problem, which gives me hope that it is possible to explain this product. 

My goal is to augment application security best practices with software.

The rule set creation is the biggest obstacle to adoption.  I've got so many ideas here, I wish I could split myself in two.
Christopher Baus Send private email
Thursday, December 02, 2004
 
 
I think it's an interesting idea. 

I'm not sure I understand it completely though.  So, you (Mr. Web Applicationer) build your app and deploy it.  Then you set up a list of 'rules' to allow/prohibit certain types of requests.  One example might be prohibiting any requests that have '<script>' in the query string or post data.  Sort of like what ASP.NET introduced in 1.1 with the validateRequest attribute?

I think the HTTP protocol is well defined enough that you could come up with a bunch of rules for HTTP, then make specific rules for specific operating systems, web servers, and web application languages.

In my example though, you might actually want '<script>' to be allowed through for a specific web page that lets users edit the content on their own blog.  It's application-specific and even page-specific.  I guess in that case you could specify an exception based on a specific URL?
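
Something like this is what I'm picturing -- a totally made-up rule format, just to be concrete:

import re

SCRIPT_TAG = re.compile(r"<\s*script", re.IGNORECASE)

# Hypothetical exception list: pages where raw HTML is legitimately posted,
# e.g. the blog owner's edit page.
EXEMPT_PATHS = {"/blog/edit"}

def blocked(path, post_data):
    if path in EXEMPT_PATHS:
        return False
    return bool(SCRIPT_TAG.search(post_data))

print(blocked("/guestbook/add", "msg=<script>alert(1)</script>"))           # True: rejected
print(blocked("/blog/edit", "body=<p>my post</p><script>widget</script>"))  # False: allowed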

I'm still pretty sure that I'm not seeing the light at the end of the tunnel.  Please continue explaining it.
GuyIncognito
Thursday, December 02, 2004
 
 
A few years ago I used a product called (I think) Sanctum AppShield which does roughly what you're describing.  It sat in front of the web server and validated all the HTTP requests before they got to the server.

The problem with it was what you're already talking about and the classic security product problem: the quality of the security is inversely related to ease of use.

Going through every nook and cranny of our elaborate web application and defining rules was a daunting task.  Additionally, the people who were best equipped to do this were the good developers of the web application, and they were already pretty busy (and relatively expensive).  So what happened is exactly what you'd expect: we set up some very basic rules that applied to the whole site to justify the product's purchase, and it's been running that way ever since.

All of which is to say, if you can come up with some kind of crawler that actually works and builds high-quality rules, that would be a killer feature.  Keep in mind that the biggest advantage of a web application is how easy it is to deploy updates.  A good automated rule generator will make it easy to update an existing set of rules when the app changes.
Ian Olsen Send private email
Tuesday, December 07, 2004
 
 
Ian,

I am familiar with Sanctum's product.  They are the biggest player in this market.  I really appreciate your input as a user of their product.

What you are describing is exactly what I have run into when presenting this to other developers.  My feeling is that the best way to approach this market is to find those who are really, really concerned about security first. 

I understand how important the UI is.  Joel's always right.  Usability really matters.  The UI should drive the security *process*, and not just the security software. 

Documenting which parameters and values are accepted by an application is crucial to auditing its security, yet application parameters (and their accepted values) are almost always locked up in the application source code.  It is my goal to change this.
Christopher Baus Send private email
Tuesday, December 07, 2004
 
 
Microsoft ISA has HTTP filtering, and it's used as a front end for Exchange RPC over HTTP. It sucks. It is slow. Non-cached data can take 10+ seconds to load the first time. Part of this delay is due to ISA. I'm not sure what the requirements are that put ISA in front of Exchange for RPC over HTTP, but you might consider looking at that market.

Sunday, December 19, 2004
 
 
Also, in regard to automating it -- why not build the core rule set by putting your platform in front of the HTTP farm in a "rule set mode," so your customer can browse the site and perform the expected operations (of course they'll miss some), and build a rule set from that?

That would be slick: it would make the initial rule set creation relatively quick, yet also trade off a bit of the "grab 'em by the balls" approach to security. Yeah, it's not all that, but examples are king, and this would provide examples galore.
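
Something like this, maybe -- all the names here are hypothetical, the point is just the observe-traffic-then-emit-a-draft-rule-set idea:

from collections import defaultdict
from urllib.parse import urlsplit, parse_qs

class RuleLearner:
    """Watch legitimate traffic in 'rule set mode' and draft rules from it."""

    def __init__(self):
        # path -> parameter name -> longest value seen so far
        self.seen = defaultdict(lambda: defaultdict(int))

    def observe(self, url):
        parts = urlsplit(url)
        path = self.seen[parts.path]          # register the path even with no params
        for name, values in parse_qs(parts.query, keep_blank_values=True).items():
            path[name] = max(path[name], max(len(v) for v in values))

    def draft_rules(self):
        """Dump a draft rule set for a human to review and tighten."""
        return {path: dict(params) for path, params in self.seen.items()}

learner = RuleLearner()
for url in ["/", "/search?q=widgets", "/search?q=blue+widgets", "/item?id=1234"]:
    learner.observe(url)
print(learner.draft_rules())
# {'/': {}, '/search': {'q': 12}, '/item': {'id': 4}}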

Sunday, December 19, 2004
 
 
Well, a lot of the functionality sounds exactly like what a firewall or HTTP proxy should do. All of the validation against the HTTP spec falls in here, plus the addition of rules around particular parts (content length limitations, POST size, etc.).

Validating for business rules is more interesting, but as you point out, more challenging. If you locked this down enough to be sure that -no- invalid requests got through, then any time a developer wanted to add a feature or a new page, the rules would have to be updated. I don't see static analysis of the code helping much here; too many dynamic possibilities. Some sort of annotation in the source done by the developer (in Java, annotation support would be the obvious fit for this; some other in-comment format for other languages) to define the necessary rules would at least consolidate them in the same place as the code. Then you'd run a scanner on the source file which would produce the rules for your software/device. But mapping those back-end expectations to front-end URLs would still be a challenge.
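
Just to show the shape of the idea, here's a quick analogue sketched with Python decorators instead of the Java annotations I mentioned (purely illustrative, all names invented):

RULESET = {}    # harvested declarations: path -> {parameter: constraints}

def expects(path, **params):
    """Declare the parameters a handler accepts and register a proxy rule."""
    def register(handler):
        RULESET[path] = params
        return handler
    return register

@expects("/search", q={"maxlen": 64, "charset": "alnum"})
def search(q):
    pass    # real handler body goes here

@expects("/login", user={"maxlen": 16}, password={"maxlen": 64})
def login(user, password):
    pass

# A build step (or the scanner) would dump RULESET as the rule set the
# proxy loads, so the rules live right next to the code they describe.
print(RULESET)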

Another question, do the developers rely on this validation alone for validating requests? This is just one layer of protection. If you're following good security practice, then this should just be one layer of validation, which means you should still validate on the application side as well.

And finally, in most enterprises I've worked in, something like this would be controlled by the network/security team, separate from the application deployment/development team. Having to involve yet another group in the testing and deployment of an app every time you add a new page or request parameter to the site would be a major pain.

In short, it sounds like something I might consider using where security was absolutely, absolutely critical, and then it would become one part of my defense in depth. If security wasn't absolutely critical, then I'd probably just validate on the server side, since it would be cheaper and only slightly more risky.
Rob Meyer Send private email
Wednesday, December 22, 2004
 
 
Oh, and I was thinking about the "live" learning mode as well, but it's got a few problems that would make its use limited.

Primarily, it doesn't tell you what's expected, unless you use the application in a very particular way. If someone posts a "create user form" with this post data:

username=foo&password=bar&passwordverify=bar&email=foo@example.com

Nothing about that tells you that the username is restricted to 8 alphanumeric characters, the password is 8 characters, etc.  It would work okay for URLs, and maybe for request parameters that are part of application behavior rather than exchanging data with the client (/picture.jsp?id=xxxxx), but not much else.

Maybe a naming convention could work: encode the validation rules and append them to the field names (instead of username, username-A3DB, which could be a rule for 8 characters, alphanumeric, non-empty). That would require application changes to the field names, but it might be the most workable idea. Maybe make it an ID that you append, and then you can build your own custom rules or use a wide variety of off-the-shelf ones.
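
On the proxy side, decoding that convention could be as simple as this (the rule IDs and the table are invented for the example):

import re

# Hypothetical rule table keyed by the ID appended to field names.
RULE_TABLE = {
    "A3DB": re.compile(r"^[A-Za-z0-9]{1,8}$"),   # up to 8 alphanumeric, non-empty
    "B7F0": re.compile(r"^.{8,64}$"),            # e.g. a password: 8-64 characters
}

def validate_field(name, value):
    """Split 'username-A3DB' into base name and rule ID, then apply the rule."""
    base, _, rule_id = name.rpartition("-")
    rule = RULE_TABLE.get(rule_id)
    if rule is None:
        return False          # unknown or missing rule ID: reject by default
    return bool(rule.match(value))

print(validate_field("username-A3DB", "foo"))        # True
print(validate_field("username-A3DB", "foo bar!!"))  # False: space and punctuation
print(validate_field("comment", "anything"))         # False: no rule ID at all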

Kind of a maintenance nightmare though...
Rob Meyer Send private email
Wednesday, December 22, 2004
 
 

This topic is archived. No further replies will be accepted.
