A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
We've got a C++ API that allows you to instantiate a singleton resource using a configuration file. If you try instantiating two instances of that resource we detect this and return an error. So far so good :)
Now we've layered a Java API on top of this. We have two implementations: one making use of JNI (layered on top of the aforementioned C++ implementation) and one implemented purely in Java, without JNI.
The problem arises in the JNI implementation when one creates a resource in the Java layer, lets it drop out of scope (so it's "dead" but not yet garbage collected), then tries to instantiate a second resource. The C++ layer thinks the user is trying to create two instances, while from the user's point of view the first instance was disposed of before the second was created.
An easy solution would be to add a dispose() method to the resource, ensuring immediate cleanup, but this wouldn't translate well into the non-JNI implementation of the API (where it would do nothing).
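For what it's worth, a dispose() method need not be entirely meaningless in the pure-Java implementation: it can still invalidate the handle, so both implementations share the same contract. A minimal sketch, with all names hypothetical (the original API's names aren't given here):

```java
// Hypothetical sketch: dispose() releases the native singleton in the JNI
// implementation; in the pure-Java one it merely invalidates the handle.
interface Recognizer {
    String recognize(String audio);
    void dispose();
}

class PureJavaRecognizer implements Recognizer {
    private boolean disposed = false;

    public String recognize(String audio) {
        if (disposed) throw new IllegalStateException("recognizer already disposed");
        return "recognized:" + audio;
    }

    public void dispose() {
        disposed = true; // no native state to free, but the handle is now invalid
    }
}
```

This keeps the API consistent across implementations: calling recognize() after dispose() fails the same way everywhere, even though only the JNI version actually frees anything.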
Is there a way for us to refactor our code or design to solve this leaky abstraction while retaining a consistent and meaningful API across the JNI and non-JNI implementations?
For example, one might argue that we shouldn't be creating singletons the way we do in the C++ layer, using createFoo(configFile). The problem is that the API allows different implementations of Foo. Some are singletons (residing on embedded devices such as cell phones) while others are not (residing on remote network servers). Someone is going to be shipping our API with *some* implementation under the hood, and we expect the application code (written on top of our API) to remain the same regardless of which implementation of Foo (singleton or not) is shipped with it.
Tuesday, June 05, 2007
Can you prevent the Java resource from going out of scope? Hold it in a Java Singleton or just a static variable?
Tuesday, June 05, 2007
Is the class a singleton or not? If it is, there's a bug in the Java code that you need to fix. Also, singletons usually have an accessor function, which yours doesn't seem to - it looks more like a factory function.
You need to decide on the interface and enforce it - it doesn't matter what the calling language is.
The problem is that I don't know ahead of time whether the resource I am requesting will be a singleton or not. On some platforms it will be while on others it will not.
I fully agree with you that if I know ahead of time that I am dealing with a singleton then I should modify my design accordingly but given the fact that I don't have this option is there anything else I can do?
Tuesday, June 05, 2007
You haven't given enough concrete information for us to help you.
Singleton is a creation pattern, so "I don't know ahead of time whether the resource I am requesting will be a singleton or not" doesn't make any sense - your design dictates if it is a singleton. If you've already created it, you return the existing one.
If your factory function can fail with a "I already created one" error, it sounds like a hard interface to program against unless you also provide a way to get that instance.
You made trouble for yourself when you decided that the second attempt to create the object should return an error. That behavior is fundamentally incompatible with normal objects. The more useful pattern is to have a factory that creates the singleton if it doesn’t exist, or returns a pointer to the existing one if it does.
If you really need to ability to destroy the singleton when no one’s using it, add a reference count, but that’s usually overkill in practice.
Of course, if you have code that *depends* on getting the error if you try to construct a second instance of the singleton, this won’t help you.
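The create-or-return factory described above can be sketched in a few lines (Foo and the config-file parameter are placeholders, not the original API):

```java
// Sketch of a create-or-return factory: the second call hands back the
// existing instance instead of returning an error.
class Foo {
    final String config;
    Foo(String config) { this.config = config; }
}

class FooFactory {
    private static Foo instance;

    // Creates the singleton on first call; later calls return the same
    // object. Note the config of later calls is ignored, which a real
    // implementation might want to detect and report.
    static synchronized Foo createFoo(String configFile) {
        if (instance == null) {
            instance = new Foo(configFile);
        }
        return instance;
    }
}
```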
Tuesday, June 05, 2007
Brian and Skorj,
I think I better give you a more concrete example of why I can't do what you say.
I am building a speech recognizer API that is meant to run both on cell phones and on desktop machines.
In one deployment (cell phone) memory is tight and speech recognizers tend not to be thread-safe, so you end up with a singleton instance of the speech recognizer. On the other hand, if you deploy the same API on a desktop machine there is a need to create multiple recognizer instances. Whether I end up using a singleton or allowing multiple instances is really decided at deployment-time, not design-time.
As such, I can't very well design the API around a singleton getInstance() method, because other platforms will allow multiple recognizers. Yet if I use createRecognizer() the problem becomes that I need to return a failure when someone tries to create two instances.
I think that either you are missing the whole point of the singleton/factory pattern or we are not seeing something that prevents you from using it.
As Brian and Skorj already said, just put a static method getRecognizerInstance() somewhere in your library. This factory method can decide what to do depending on the environment.
In the cell phone, the first call will actually instantiate the recognizer and store it in a static variable. Next time, this method will return the same instance.
In the desktop, this same method will instantiate and return a recognizer each time you call it.
Does this solve your problem, or is there anything that prevents you from using this approach?
P.S. I've found some nasty problems with the simplistic factory method pattern that I described when used in multithreaded applications, but there are ways to do it properly.
I suggested as much to my co-workers but they argued that if we were to go the getRecognizerInstance() route then your application would lose portability when moving from one domain that only allowed one recognizer to another domain that allowed multiple instances, and back.
If you first code your application against a singleton model and assume that getRecognizerInstance() will return the same instance if you invoke it a second time then this will not be true in the domain with multiple instances.
On the flip side if you code against multiple instances in one platform and move to another which only supports one you will run into threading issues.
The reasoning goes that at least with createRecognizerInstance() you're guaranteed that you'll always get back a new instance or a runtime failure early on. This is preferable to having a race condition failure occur later on if you were accessing a singleton from two different threads.
Does that make sense or did I miss something?
There is something that I don't understand from what you exposed: why should the application know whether there is a single instance or more than one?
Maybe this is the "real" problem you have. Let me explain. I don't know much about your application, but from what you explained, I understand that you have recognizer objects that are provided by the library. The application is trying to create instances of this recognizer to process some input, but the application doesn't know how many of those recognizers can be created.
If this is the case, maybe your problem is not with the factory/singleton but with the rest of the application. Maybe you should consider some way to isolate the application from these platform-specific issues.
For instance, you could consider using a Command pattern to encapsulate each request from the application to the recognizers as an object, and have a Proxy object that receives these requests and distributes them among the available recognizers. In one case there will be just one; in the other case there will be more than one, but this is transparent to the application. (I'm omitting here how to create the recognizers; this might be a configuration option or something.)
Does this sound like something you could use?
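The Command-plus-Proxy idea above might look roughly like this (all names hypothetical; a real version would block or queue when no recognizer is free rather than assume one is available):

```java
import java.util.LinkedList;
import java.util.Queue;

class SimpleRecognizer {
    String recognize(String audio) { return "ok:" + audio; }
}

// Command object: encapsulates one recognition request and its result.
class RecognitionRequest {
    final String audio;
    String result;
    RecognitionRequest(String audio) { this.audio = audio; }
}

// Proxy: the application only sees submit(); whether one recognizer or
// many sit behind it is an implementation detail.
class RecognizerProxy {
    private final Queue<SimpleRecognizer> free = new LinkedList<SimpleRecognizer>();

    RecognizerProxy(int recognizers) {
        for (int i = 0; i < recognizers; i++) free.add(new SimpleRecognizer());
    }

    synchronized void submit(RecognitionRequest req) {
        SimpleRecognizer r = free.remove(); // a real version would wait here
        req.result = r.recognize(req.audio);
        free.add(r);
    }
}
```

The same application code runs unchanged whether the proxy is constructed with one recognizer (cell phone) or several (desktop).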
I think you might need a higher abstraction than the one you're suggesting. It should handle getting a new recognizer instance and using it in the one case, versus waiting for the single instance to become free, locking it, using it and unlocking it in the other.
Think of it like a thread pool. You don't care how many threads are actually in the pool; you just queue up jobs and they get run. The thread pool implementation worries about how many threads the platform can support.
The approach you mentioned would probably work well against a synchronous API. Unfortunately for us "the great overlords" of management forced us to design a fully asynchronous API for the recognizer.
Don't be afraid of the great overlord of design or any other angry divinity ;-)
The approach I suggested would work perfectly in the asynchronous case: just put a callback in the command object to inform the invoker of the completion of the request.
By the way, the thread pool suggested by Brian is a well known case of the command pattern.
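A minimal sketch of the callback-carrying command (names hypothetical; the callback is invoked inline here for brevity, whereas a real recognizer would invoke it from its own worker thread):

```java
interface RecognitionCallback {
    void onComplete(String result);
}

// Command object that carries its completion callback with it.
class AsyncCommand {
    final String audio;
    final RecognitionCallback callback;
    AsyncCommand(String audio, RecognitionCallback callback) {
        this.audio = audio;
        this.callback = callback;
    }
}

class AsyncRecognizer {
    // A real implementation would run the recognition on a worker thread
    // and invoke the callback from there; this sketch completes inline.
    void submit(AsyncCommand cmd) {
        cmd.callback.onComplete("done:" + cmd.audio);
    }
}
```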
+1 Pablo for recognizing the multi-threaded issues.
You can probably dodge the "no synchronization objects" barrier by using InterlockedCompareExchange() instead of a critical section or mutex or the like, since you're only testing and setting a pointer here.
On a single processor machine using InterlockedCompareExchange() has almost no performance cost, as it just does processor synchronization not thread synchronization - that is, it doesn't ever block on another thread.
Only problem I see is that I'm not sure that InterlockedCompareExchange() is available on WinCE.
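On the Java side of the API, the same compare-and-swap idea is available portably via java.util.concurrent.atomic (since Java 5), so the lock-free publication can be sketched without worrying about per-platform intrinsics (class name hypothetical):

```java
import java.util.concurrent.atomic.AtomicReference;

class CasSingleton {
    private static final AtomicReference<CasSingleton> ref =
            new AtomicReference<CasSingleton>();

    // Lock-free publication: at most one thread wins the compareAndSet;
    // losers discard their candidate and use the published instance.
    static CasSingleton getInstance() {
        CasSingleton current = ref.get();
        if (current != null) return current;
        CasSingleton candidate = new CasSingleton();
        if (ref.compareAndSet(null, candidate)) return candidate;
        return ref.get();
    }
}
```

Note that losing threads may construct a throwaway instance, so this only suits objects whose construction is cheap and side-effect free.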
Friday, June 08, 2007
This topic is archived. No further replies will be accepted.