A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
I just started reading about Unit Testing. I would love to have automated tests for my projects. However, I'm wondering what other folks out there do for database applications.
For example, I have an object that adds a user to a security role, which ends up updating a role table in my database. When I write my unit test, would I have the final check look in the database to see if the update happened, or would I have it check the object's role status, or both?
Friday, March 03, 2006
Whatever it takes. If you can check the raw record and you know that all the DAO/DAL on top of it work, there's no reason to test those. Of course, the only way to *know* those are working is to have testing there too...
If you can test the aspects separately, do so.
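To make that concrete, here is a minimal sketch in Python with SQLite as a stand-in database (the thread is about .NET/Oracle, but the idea is the same; `add_user_to_role` and the table name are hypothetical). It checks both the object's view of the role status and the raw record:

```python
import sqlite3

def add_user_to_role(conn, user, role):
    """Hypothetical object under test: writes the role table and
    returns the user's current roles."""
    conn.execute("INSERT INTO user_roles (user, role) VALUES (?, ?)",
                 (user, role))
    return [r[0] for r in conn.execute(
        "SELECT role FROM user_roles WHERE user = ?", (user,))]

# Fresh, known state for the test
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_roles (user TEXT, role TEXT)")

roles = add_user_to_role(conn, "alice", "admin")

# Check 1: the object's view of the role status
assert "admin" in roles

# Check 2: the raw record actually landed in the table
count = conn.execute(
    "SELECT COUNT(*) FROM user_roles WHERE user='alice' AND role='admin'"
).fetchone()[0]
assert count == 1
```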
Friday, March 03, 2006
You might want to check out this site:
You might also want to check out the method used by RubyOnRails, which involves having a "testing database" that is separate from your development and production db:
The question, zig zag, is what set of circumstances would make it certain that the test had passed (or failed)? That's what you have to test.
Database testing is a tough area in automated unit testing. Tests against databases tend to be slow, but automated unit tests are supposed to be very fast. They also tend to permanently modify the database, whereas automated unit testing is supposed to start with a known state each time. Short of a total drop-and-create, it's hard to guarantee the state of a database.
One thing you might try to keep things moving is to set a certain amount of time you're willing to wait for a set of unit tests to run. Perhaps for a group of database-hitting unit tests, five minutes, or three, or one is a reasonable amount of time to wait. So go ahead and write the tests, but once they take longer than that, see what you can optimize or split into other groups of tests. Then, once the time is reasonable again, proceed with writing more tests.
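One way to enforce a budget like that is a wrapper that fails a test when it runs over. A rough sketch in Python (the budget value and test name are made up):

```python
import functools
import time

BUDGET_SECONDS = 0.5  # arbitrary per-test budget; tune to taste

def within_budget(fn):
    """Fail the test loudly if it exceeds the time budget, as a nudge
    to optimize it or move it into a slower test group."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        elapsed = time.monotonic() - start
        assert elapsed <= BUDGET_SECONDS, (
            f"{fn.__name__} took {elapsed:.2f}s, "
            f"over the {BUDGET_SECONDS}s budget")
        return result
    return wrapper

@within_budget
def test_fast_query():
    # stand-in for a database-hitting test
    time.sleep(0.01)
    return "ok"

result = test_fast_query()
```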
Here's a page that talks about one way to do unit testing against a database in .NET:
I haven't tried it, but it looks interesting...
Regarding Kyralessa's concerns:
1. It's a rash coder who would run unit tests against live data. In any case the utPlSql standard is to have a setup and a cleanup procedure defined in every package; ideally the state after the test is the same as it was before.
2. Speed of tests. Many of the most important operations to test will complete in seconds rather than minutes.
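The setup/cleanup convention in point 1 looks roughly like this in Python (utPlSql uses PL/SQL procedures, but the shape is the same; the table and fixture row are invented for illustration). The key property is that the state after the test equals the state before:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roles (name TEXT)")
conn.execute("INSERT INTO roles VALUES ('admin')")
conn.commit()

def snapshot():
    return conn.execute("SELECT * FROM roles ORDER BY name").fetchall()

before = snapshot()

# --- setup: add the fixture rows this test needs ---
conn.execute("INSERT INTO roles VALUES ('temp_tester')")

# --- the test itself ---
n = conn.execute("SELECT COUNT(*) FROM roles").fetchone()[0]
assert n == 2

# --- cleanup: remove exactly what setup added ---
conn.execute("DELETE FROM roles WHERE name = 'temp_tester'")

after = snapshot()
assert after == before  # state after the test equals state before
```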
> They also tend to permanently modify the database, whereas automated unit testing is supposed to start with a known state each time. Short of a total drop-and-create, it's hard to guarantee the state of a database.
Run your tests against a *test* database, not a live database. I'd vote for making it impossible to connect to a live database from the development machine, just to ensure this.
Then do a total drop-and-create at the start of each test cycle.
Sure, it takes time, but, um, so? All the seconds that you save by not testing will be made up for by the hours spent debugging after flawed code gets into production.
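A minimal sketch of that drop-and-create step, using Python and SQLite as a stand-in (the schema and seed row are hypothetical):

```python
import sqlite3

SCHEMA = """
DROP TABLE IF EXISTS user_roles;
CREATE TABLE user_roles (user TEXT, role TEXT);
INSERT INTO user_roles VALUES ('alice', 'admin');
"""

def restage(conn):
    """Total drop-and-create: rebuild the schema and seed data so
    every test cycle starts from the same known state."""
    conn.executescript(SCHEMA)

conn = sqlite3.connect(":memory:")
restage(conn)

# a test dirties the database...
conn.execute("INSERT INTO user_roles VALUES ('bob', 'guest')")

# ...and the next cycle starts clean again
restage(conn)
rows = conn.execute("SELECT COUNT(*) FROM user_roles").fetchone()[0]
```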
Besides, if you have a well-layered system then most code will talk to an abstract interface, and when testing it you can use mock objects with a suitable in-memory system instead of a connection to a real database.
In reality, I'd have the low-level database layer tested against a real database, but most of the real code wouldn't be tightly coupled to it, so it could use a more suitable testing framework.
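A sketch of that layering in Python (all names here are invented for illustration): the business code only sees an abstract interface, so tests can hand it an in-memory substitute instead of a real database connection.

```python
class RoleStore:
    """Abstract interface the business code talks to."""
    def add_role(self, user, role):
        raise NotImplementedError
    def roles_for(self, user):
        raise NotImplementedError

class InMemoryRoleStore(RoleStore):
    """Test double: same contract, no real database."""
    def __init__(self):
        self._data = {}
    def add_role(self, user, role):
        self._data.setdefault(user, set()).add(role)
    def roles_for(self, user):
        return self._data.get(user, set())

def grant_admin(store, user):
    """Business logic under test; it only sees the interface."""
    store.add_role(user, "admin")
    return "admin" in store.roles_for(user)

store = InMemoryRoleStore()
ok = grant_admin(store, "alice")
```

A production `RoleStore` backed by the real database would be tested separately, against a real (test) database.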
Sunday, March 05, 2006
+1 for restaging the DB between tests. Before each single unit test, not just once for all of them.
This gets slow, of course, but you might want to consider the solution Ruby on Rails uses: you can load all of the data, and then run each unit test inside of a transaction. The transaction is rolled back after the unit test completes (whether it succeeds or fails). This has the advantage of being very fast, because many DBs won't write anything to disk during that transaction.
Of course, you have the downside that you only have one set of data.
Another solution (which isn't mutually exclusive to the first) is to run it against an in-memory DB.
Anon to protect the guilty
Sunday, March 05, 2006
In my unit testing of databases, the number one problem was the state of the data in the database.
First, you shouldn't be running any unit testing on Prod. If you are concerned about something moving from a DEV environment to PROD not working, you have not done the homework for the push requirements in the first place.
Second, don't unit test against the same database on which you have developers changing the data. To maintain your sanity, run all the unit testing on your own database designated for unit testing only.
Third (may/may not apply to OP): if you have database, DAL, BAL, and frontend tests, control the order from the bottom up. So when you run the tests, do them in this order:
1. DB tests
2. DAL tests
3. BAL tests
4. FE tests
That way, if something in the DB is screwed up, you won't have DAL, BAL and FE tests failing and not know why.
Fourth (and I don't know if this is the best way), when I last used NUnit, I created some classes that just inserted test data (where all the data was randomly generated). I then passed this back to the test, which did the testing and then did true deletes on the data. While this was a bit of work, I only needed a copy of the empty database with lookup tables (state, etc.) populated, and the unit tests would do the rest.
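That pattern looks something like this in Python with SQLite (the original was NUnit/C#; the tables and row counts here are made up): random rows go in, the test runs, then true deletes leave only the lookup data behind.

```python
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE states (code TEXT)")  # lookup table stays populated
conn.execute("INSERT INTO states VALUES ('OH'), ('TX')")
conn.commit()

def insert_test_users(n):
    """Insert n randomly generated users; return their ids so the
    test can true-delete exactly what it created."""
    ids = []
    for _ in range(n):
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        ids.append(cur.lastrowid)
    return ids

ids = insert_test_users(3)

# ... the test runs against these rows ...
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 3

# True deletes afterwards: only the lookup data remains
conn.executemany("DELETE FROM users WHERE id = ?", [(i,) for i in ids])
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
lookups = conn.execute("SELECT COUNT(*) FROM states").fetchone()[0]
```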
Hope this helps!
(to directly address the OP question):
I would have a test to test the functionality of the object (add user to security role).
I would also have a test that the BAL layer did its job, the DAL, and the procs on the DB side.
Some could be omitted, but a lot of the tests don't take that long to write, and running them in the order I described in my earlier post will show you exactly where you have a problem.
Main business database rule, assuming contracted developer team: Developers don't have access to production database. Ever. Not even read-only.
Regarding testing... format + reload sounds nice, but... how much time do you think you need for this on a 30 GB+ database? Testing databases is hard hard hard, no doubt :)
Tuesday, March 07, 2006
Thanks to all for the posts. They have definitely given me some direction.
Thursday, March 09, 2006
Regarding the article referenced above, I think using Enterprise Services is academic, because I know I wouldn't bother and don't think anyone else really would either.
Probably the best way is to have a script that populates test data and is re-runnable.
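"Re-runnable" here means idempotent: running the script twice leaves the same end state. A small Python/SQLite sketch of the idea (the table and seed rows are hypothetical; the same effect comes from upsert or MERGE statements in other databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roles (name TEXT PRIMARY KEY)")

SEED = """
INSERT OR REPLACE INTO roles (name) VALUES ('admin');
INSERT OR REPLACE INTO roles (name) VALUES ('guest');
"""

def seed(conn):
    """Re-runnable seed script: upserts produce the same end state
    no matter how many times the script runs."""
    conn.executescript(SEED)

seed(conn)
seed(conn)  # running it again is harmless
count = conn.execute("SELECT COUNT(*) FROM roles").fetchone()[0]
```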
The other alternative is to use VMWare. I've done this before and it works great.
Wednesday, March 22, 2006
This topic is archived. No further replies will be accepted.