Clemens has posted an interesting entry on the topics of Service Orientation, Service Autonomy and Caching here. It's worth reading. Several things in Clemens' piece caught my interest:
The Service Orientation Principles and Autonomy: Clemens comments that he was initially confused by the meaning of the word "Autonomy" as part of the key SO principles, thinking it was related to the concept of autonomous computing. I admit I was confused by it at first as well, though not for the same reason as Clemens.
My initial exposure to service orientation was in the context of business applications that started exposing services as a way to make their functionality and data available to external applications, not in the context of standalone services built from scratch to be "only services" (if that makes any sense). From that point of view, "autonomous" services, as I understood the word at the time, didn't seem to make a lot of sense, at least up to a point.
However, if you consider Clemens' explanation that Autonomy == Avoid Coupling, the scenario I was referring to fits just fine within that context! Moreover, that reading fits more cleanly with the rest of the SO principles (imho, at least) and with my perspective of SOA as a sort of "design philosophy" for software systems. Of course, it also puts it in words you can explain, which seems to me the best thing :)
Some might argue that Clemens' example of two "independent" services (as seen from the outside) sharing a single data store is a "bad thing" because that sharing might [will] leak into your service definition somehow. I disagree; it is perfectly possible to avoid that situation. Moreover, if sharing a data store is bad, then I don't see how sharing anything else isn't equally bad. What if you share security mechanisms? Say you have a single security repository for both services (or use a single sign-on mechanism). Does that leak into your service interface? Most likely. Is it a bad thing? Most likely not! Or how about sharing a .NET component you wrote as part of the implementation? Don't you run the same risks?
My point here is that this is exactly why good design of the "edge" (i.e. the service interface) is so key to making services reusable. That sometimes means you need to be pretty strict about what can and cannot go into your interface, even when it seems counterintuitive: avoiding dependencies on internal IDs, for example, or asking callers for all the input data a process needs even if your backend storage might already contain part of that data (and I have a pretty specific example where this really paid off, if anyone's interested).
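To make that concrete, here's a minimal sketch (in C#, with purely hypothetical names; the real contract was more involved) of what I mean by designing the edge around business-level data instead of internal IDs:

```csharp
// Hypothetical sketch of the "edge": the request message carries everything the
// process needs in business-level terms, instead of relying on an internal
// CustomerId that only means something to our own backend storage.
public class SubmitOrderRequest
{
    // A business identifier the caller already knows, not our surrogate key.
    public string CustomerTaxNumber;

    // Asked for explicitly, even though for existing customers our storage
    // may already have an address on file.
    public string ShippingAddress;

    public string[] ProductCodes;
}

public class SubmitOrderResponse
{
    public string OrderNumber;
}

public interface IOrderService
{
    SubmitOrderResponse SubmitOrder(SubmitOrderRequest request);
}
```

The caller doesn't need to know anything about our surrogate keys, and doesn't have to trust that our storage already holds part of its data; the message itself carries what the process needs.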
Another thing I've gleaned from my experience (and I won't discount the possibility that my experience is misleading me here) is that most services don't really exist in isolation. The big trick (and a large part of the SOA principles) is to make it look like they do. For example, many significant services might be supported by business-user applications where, at the very least, business-level configuration is done, and possibly much more. You might make it look like the service is completely independent, in accordance with the principle that this is just an internal implementation detail, but the real truth is, well, it isn't, since the service by itself, without the business-level configuration application, would be pretty worthless.
Caching: Clemens makes some pretty good points about caching in SO scenarios. I think this is one of those issues that, for many business-level scenarios (particularly intranet scenarios as opposed to wide internet ones), needs to be pretty well defined, and it usually requires a clear understanding of the implications and consequences of caching data, as well as knowing which data can be cached and where. But it can also pay off in a big way. We had a case a couple of years ago where we cached the data returned from an external service (in a SQL Server database), and then used that data as input to an important business process that was itself exposed as another service to other applications. Not only was this key to providing better response times, it also saved the business a ton of money: each query to the external service for the original data cost the company $X, so avoiding repeated queries for the same data (which turned out to be pretty common) translated directly into savings.
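As a rough illustration of the pattern (hypothetical names; in the real system the cache lived in a SQL Server table), it was essentially a look-up-before-you-pay wrapper around the external service:

```csharp
using System;

// Hypothetical sketch: check the local cache before paying for a query against
// the external service.
public class CreditReport
{
    public string CustomerTaxNumber;
    public decimal Score;
    public DateTime RetrievedOn;
}

public interface ICreditReportCache
{
    CreditReport Find(string customerTaxNumber);      // returns null on a miss
    void Store(CreditReport report);
}

public interface IExternalCreditService
{
    CreditReport Query(string customerTaxNumber);     // each call costs $X
}

public class CachingCreditReportProvider
{
    private readonly ICreditReportCache cache;
    private readonly IExternalCreditService external;

    public CachingCreditReportProvider(ICreditReportCache cache, IExternalCreditService external)
    {
        this.cache = cache;
        this.external = external;
    }

    public CreditReport GetReport(string customerTaxNumber)
    {
        // Repeated queries for the same data were common, so a cache hit here
        // saves both latency and the per-query fee.
        CreditReport report = cache.Find(customerTaxNumber);
        if (report == null)
        {
            report = external.Query(customerTaxNumber);
            cache.Store(report);
        }
        return report;
    }
}
```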
One important point regarding caching is that we technical folks are not always the best people to decide how long (or even when) data can be cached in scenarios like this. Sometimes you'll just want to let business users decide and configure the parameters for data caching and retention based on their own expertise. For example, in the scenario I just described, the company was actually assuming some degree of risk by caching that data (it was actually cached for weeks or months), so the business users were the ones correctly positioned to weigh how much risk the company could accept from occasionally using stale data against the money (and time!) they were saving by caching it in the first place.
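Continuing the sketch above (again with hypothetical names), the retention window was just another piece of business configuration rather than a constant the developers picked; here I read it from an app setting purely for illustration:

```csharp
using System;
using System.Configuration;

// Hypothetical sketch: business users own the retention setting (in our case it
// lived in a table they maintained; an app setting stands in for it here).
public static class CreditReportRetention
{
    public static TimeSpan MaxAge()
    {
        // e.g. <add key="CreditReportRetentionDays" value="60" /> in the config file
        string days = ConfigurationManager.AppSettings["CreditReportRetentionDays"];
        return TimeSpan.FromDays(double.Parse(days));
    }

    public static bool IsStillUsable(DateTime retrievedOn, DateTime now)
    {
        return (now - retrievedOn) <= MaxAge();
    }
}
```

In the provider sketch above, Find would simply treat anything older than MaxAge() as a miss, so the business users' setting directly controls how much staleness the system tolerates.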